Asset and Risk Management
For other titles in the Wiley Finance Series please see www.wiley.com/finance


Asset and Risk Management
Risk Oriented Finance

Louis Esch, Robert Kieffer and Thierry Lopez
with the collaboration of C. Berbé, P. Damel, M. Debay and J.-F. Hannosset

Published by

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England Telephone (+44) 1243 779777

Copyright © 2005 De Boeck & Larcier s.a.
Editions De Boeck Université, Rue des Minimes 39, B-1000 Brussels
First printed in French by De Boeck & Larcier s.a. – ISBN: 2-8041-3309-5

Email (for orders and customer service enquiries): [email protected]
Visit our Home Page on www.wileyeurope.com or www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to [email protected], or faxed to (+44) 1243 770620.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data
Esch, Louis.
Asset and risk management : risk oriented finance / Louis Esch, Robert Kieffer, and Thierry Lopez.
p. cm.
Includes bibliographical references and index.
ISBN 0-471-49144-6 (cloth : alk. paper)
1. Investment analysis. 2. Asset-liability management. 3. Risk management.
I. Kieffer, Robert. II. Lopez, Thierry. III. Title.
HG4529.E83 2005
332.63 2042—dc22    2004018708

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN 0-471-49144-6

Typeset in 10/12pt Times by Laserwords Private Limited, Chennai, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.

Contents

Collaborators
Foreword by Philippe Jorion


Acknowledgements


Introduction
   Areas covered
   Who is this book for?


PART I THE MASSIVE CHANGES IN THE WORLD OF FINANCE
   Introduction
   1 The Regulatory Context
      1.1 Precautionary surveillance
      1.2 The Basle Committee
         1.2.1 General information
         1.2.2 Basle II and the philosophy of operational risk
      1.3 Accounting standards
         1.3.1 Standard-setting organisations
         1.3.2 The IASB
   2 Changes in Financial Risk Management
      2.1 Definitions
         2.1.1 Typology of risks
         2.1.2 Risk management methodology
      2.2 Changes in financial risk management
         2.2.1 Towards an integrated risk management
         2.2.2 The ‘cost’ of risk management
      2.3 A new risk-return world
         2.3.1 Towards a minimisation of risk for an anticipated return
         2.3.2 Theoretical formalisation


PART II EVALUATING FINANCIAL ASSETS
   Introduction


   3 Equities
      3.1 The basics
         3.1.1 Return and risk
         3.1.2 Market efficiency
         3.1.3 Equity valuation models
      3.2 Portfolio diversification and management
         3.2.1 Principles of diversification
         3.2.2 Diversification and portfolio size
         3.2.3 Markowitz model and critical line algorithm
         3.2.4 Sharpe’s simple index model
         3.2.5 Model with risk-free security
         3.2.6 The Elton, Gruber and Padberg method of portfolio management
         3.2.7 Utility theory and optimal portfolio selection
         3.2.8 The market model
      3.3 Model of financial asset equilibrium and applications
         3.3.1 Capital asset pricing model
         3.3.2 Arbitrage pricing theory
         3.3.3 Performance evaluation
         3.3.4 Equity portfolio management strategies
      3.4 Equity dynamic models
         3.4.1 Deterministic models
         3.4.2 Stochastic models


   4 Bonds
      4.1 Characteristics and valuation
         4.1.1 Definitions
         4.1.2 Return on bonds
         4.1.3 Valuing a bond
      4.2 Bonds and financial risk
         4.2.1 Sources of risk
         4.2.2 Duration
         4.2.3 Convexity
      4.3 Deterministic structure of interest rates
         4.3.1 Yield curves
         4.3.2 Static interest rate structure
         4.3.3 Dynamic interest rate structure
         4.3.4 Deterministic model and stochastic model
      4.4 Bond portfolio management strategies
         4.4.1 Passive strategy: immunisation
         4.4.2 Active strategy
      4.5 Stochastic bond dynamic models
         4.5.1 Arbitrage models with one state variable
         4.5.2 The Vasicek model


         4.5.3 The Cox, Ingersoll and Ross model
         4.5.4 Stochastic duration
   5 Options
      5.1 Definitions
         5.1.1 Characteristics
         5.1.2 Use
      5.2 Value of an option
         5.2.1 Intrinsic value and time value
         5.2.2 Volatility
         5.2.3 Sensitivity parameters
         5.2.4 General properties
      5.3 Valuation models
         5.3.1 Binomial model for equity options
         5.3.2 Black and Scholes model for equity options
         5.3.3 Other models of valuation
      5.4 Strategies on options
         5.4.1 Simple strategies
         5.4.2 More complex strategies

PART III GENERAL THEORY OF VaR
   Introduction


   6 Theory of VaR
      6.1 The concept of ‘risk per share’
         6.1.1 Standard measurement of risk linked to financial products
         6.1.2 Problems with these approaches to risk
         6.1.3 Generalising the concept of ‘risk’
      6.2 VaR for a single asset
         6.2.1 Value at Risk
         6.2.2 Case of a normal distribution
      6.3 VaR for a portfolio
         6.3.1 General results
         6.3.2 Components of the VaR of a portfolio
         6.3.3 Incremental VaR


   7 VaR Estimation Techniques
      7.1 General questions in estimating VaR
         7.1.1 The problem of estimation
         7.1.2 Typology of estimation methods
      7.2 Estimated variance–covariance matrix method
         7.2.1 Identifying cash flows in financial assets
         7.2.2 Mapping cash flows with standard maturity dates
         7.2.3 Calculating VaR
      7.3 Monte Carlo simulation
         7.3.1 The Monte Carlo method and probability theory
         7.3.2 Estimation method


      7.4 Historical simulation
         7.4.1 Basic methodology
         7.4.2 The contribution of extreme value theory
      7.5 Advantages and drawbacks
         7.5.1 The theoretical viewpoint
         7.5.2 The practical viewpoint
         7.5.3 Synthesis

   8 Setting Up a VaR Methodology
      8.1 Putting together the database
         8.1.1 Which data should be chosen?
         8.1.2 The data in the example
      8.2 Calculations
         8.2.1 Treasury portfolio case
         8.2.2 Bond portfolio case
      8.3 The normality hypothesis

PART IV FROM RISK MANAGEMENT TO ASSET MANAGEMENT
   Introduction


   9 Portfolio Risk Management
      9.1 General principles
      9.2 Portfolio risk management method
         9.2.1 Investment strategy
         9.2.2 Risk framework


   10 Optimising the Global Portfolio via VaR
      10.1 Taking account of VaR in Sharpe’s simple index method
         10.1.1 The problem of minimisation
         10.1.2 Adapting the critical line algorithm to VaR
         10.1.3 Comparison of the two methods
      10.2 Taking account of VaR in the EGP method
         10.2.1 Maximising the risk premium
         10.2.2 Adapting the EGP method algorithm to VaR
         10.2.3 Comparison of the two methods
         10.2.4 Conclusion
      10.3 Optimising a global portfolio via VaR
         10.3.1 Generalisation of the asset model
         10.3.2 Construction of an optimal global portfolio
         10.3.3 Method of optimisation of global portfolio


   11 Institutional Management: APT Applied to Investment Funds
      11.1 Absolute global risk
      11.2 Relative global risk/tracking error
      11.3 Relative fund risk vs. benchmark abacus
      11.4 Allocation of systematic risk


         11.4.1 Independent allocation
         11.4.2 Joint allocation: ‘value’ and ‘growth’ example
      11.5 Allocation of performance level
      11.6 Gross performance level and risk withdrawal
      11.7 Analysis of style

PART V FROM RISK MANAGEMENT TO ASSET AND LIABILITY MANAGEMENT
   Introduction


   12 Techniques for Measuring Structural Risks in Balance Sheets
      12.1 Tools for structural risk analysis in asset and liability management
         12.1.1 Gap or liquidity risk
         12.1.2 Rate mismatches
         12.1.3 Net present value (NPV) of equity funds and sensitivity
         12.1.4 Duration of equity funds
      12.2 Simulations
      12.3 Using VaR in ALM
      12.4 Repricing schedules (modelling of contracts with floating rates)
         12.4.1 The conventions method
         12.4.2 The theoretical approach to the interest rate risk on floating rate products, through the net current value
         12.4.3 The behavioural study of rate revisions
      12.5 Replicating portfolios
         12.5.1 Presentation of replicating portfolios
         12.5.2 Replicating portfolios constructed according to convention
         12.5.3 The contract-by-contract replicating portfolio
         12.5.4 Replicating portfolios with the optimal value method


APPENDICES


Appendix 1 Mathematical Concepts
   1.1 Functions of one variable
      1.1.1 Derivatives
      1.1.2 Taylor’s formula
      1.1.3 Geometric series
   1.2 Functions of several variables
      1.2.1 Partial derivatives
      1.2.2 Taylor’s formula
   1.3 Matrix calculus
      1.3.1 Definitions
      1.3.2 Quadratic forms


Appendix 2 Probabilistic Concepts
   2.1 Random variables
      2.1.1 Random variables and probability law
      2.1.2 Typical values of random variables


   2.2 Theoretical distributions
      2.2.1 Normal distribution and associated ones
      2.2.2 Other theoretical distributions
   2.3 Stochastic processes
      2.3.1 General considerations
      2.3.2 Particular stochastic processes
      2.3.3 Stochastic differential equations


Appendix 3 Statistical Concepts
   3.1 Inferential statistics
      3.1.1 Sampling
      3.1.2 Two problems of inferential statistics
   3.2 Regressions
      3.2.1 Simple regression
      3.2.2 Multiple regression
      3.2.3 Nonlinear regression


Appendix 4 Extreme Value Theory
   4.1 Exact result
   4.2 Asymptotic results
      4.2.1 Extreme value theorem
      4.2.2 Attraction domains
      4.2.3 Generalisation


Appendix 5 Canonical Correlations
   5.1 Geometric presentation of the method
   5.2 Search for canonical characters


Appendix 6 Algebraic Presentation of Logistic Regression

Appendix 7 Time Series Models: ARCH-GARCH and EGARCH
   7.1 ARCH-GARCH models
   7.2 EGARCH models


Appendix 8 Numerical Methods for Solving Nonlinear Equations
   8.1 General principles for iterative methods
      8.1.1 Convergence
      8.1.2 Order of convergence
      8.1.3 Stop criteria
   8.2 Principal methods
      8.2.1 First order methods
      8.2.2 Newton–Raphson method
      8.2.3 Bisection method


   8.3 Nonlinear equation systems
      8.3.1 General theory of n-dimensional iteration
      8.3.2 Principal methods


Bibliography


Index


Collaborators

Christian Berbé, civil engineer from the Université libre de Bruxelles and ABAF financial analyst. Previously a director at PricewaterhouseCoopers Consulting in Luxembourg, he is a financial risk management specialist currently working as a wealth manager with Bearbull (Degroof Group).

Pascal Damel, doctor of management science from the University of Nancy, is a senior lecturer (maître de conférences) in management science at the IUT of Metz, and an independent consultant in risk management and ALM.

Michel Debay, civil engineer and physicist from the University of Liège and holder of a master’s degree in finance and insurance from the business school in Liège (HEC), currently heads the Data Warehouse Unit at SA Kredietbank in Luxembourg.

Jean-François Hannosset, actuary from the Catholic University of Louvain, currently manages the insurance department at Banque Degroof Luxembourg SA, and is director of courses at the Luxembourg Institute of Banking Training.

Foreword by Philippe Jorion

Risk management has truly undergone a revolution in the last decade. It was just over 10 years ago, in July 1993, that the Group of 30 (G-30) ofﬁcially promulgated best practices for the management of derivatives.1 Even though the G-30 issued its report in response to the string of derivatives disasters of the early 1990s, these best practices apply to all ﬁnancial instruments, not only derivatives. This was the ﬁrst time the term ‘Value-at-Risk’ (VaR) was publicly and widely mentioned. By now, VaR has become the standard benchmark for measuring ﬁnancial risk. All major banks dutifully report their VaR in quarterly or annual ﬁnancial reports. Modern risk measurement methods are not new, however. They go back to the concept of portfolio risk developed by Harry Markowitz in 1952. Markowitz noted that investors should be interested in total portfolio risk and that ‘diversiﬁcation is both observed and sensible’. He provided tools for portfolio selection. The new aspect of the VaR revolution is the application of consistent methods to measure market risk across the whole institution or portfolio, across products and business lines. These methods are now being extended to credit risk, operational risk, and to the ﬁnal frontier of enterprise-wide risk. Still, risk measurement is too often limited to a passive approach, which is to measure or to control. Modern risk-measurement techniques are much more useful than that. They can be used to manage the portfolio. Consider a portfolio manager with a myriad of securities to select from. The manager should have strong opinions on most securities. Opinions, or expected returns on individual securities, aggregate linearly into the portfolio expected return. So, assessing the effect of adding or subtracting securities on the portfolio expected return is intuitive. Risk, however, does not aggregate in a linear fashion. It depends on the number of securities, on individual volatilities and on all correlations. 
Risk-measurement methods provide tools such as marginal VaR, component VaR and incremental VaR that help the portfolio manager decide on the best trade-off between risk and return. Take a situation where a manager considers adding two securities to the portfolio. Both have the same expected return. The first, however, has negative marginal VaR; the second has positive marginal VaR. In other words, the addition of the first security will reduce the portfolio risk, while the second will increase it. Clearly, adding the first security is the better choice: it will increase the portfolio expected return and decrease its risk. Without these tools, it is hard to imagine how to manage the portfolio. As an aside, it is often easier to convince top management to invest in risk-measurement systems when it can be demonstrated that they add value through better portfolio management.

Similar choices appear at the level of the entire institution. How does a bank decide on its capital structure, that is, on the amount of equity it should hold to support its activities? Too much equity will reduce its return on equity; too little will increase the likelihood of bankruptcy. The answer lies in risk-measurement methods: the amount of equity should provide an adequate buffer against all enterprise-wide risks at a high confidence level. Once risks are measured, they can be decomposed and weighed against their expected profits. Risks that do not generate high enough payoffs can be sold off or hedged. In the past, such trade-offs were evaluated in an ad-hoc fashion.

This book provides tools for going from risk measurement to portfolio or asset management. I applaud the authors for showing how to integrate VaR-based measures in the portfolio optimisation process, in the spirit of Markowitz’s portfolio selection problem. Once risks are measured, they can be managed better.

1 The G-30 is a private, nonprofit association, founded in 1978 and consisting of senior representatives of the private and public sectors and academia. Its main purpose is to affect the policy debate on international economic and financial issues. The G-30 regularly publishes papers. See www.group30.org.

Philippe Jorion
University of California at Irvine
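The sign test on marginal VaR described in the foreword can be made concrete with a short numerical sketch. All figures below (volatilities, correlations, the 95 % quantile) are hypothetical illustrations, not data from the book; per unit invested, marginal VaR is approximated as z·cov(R_i, R_p)/σ_p, so its sign follows the sign of the candidate’s covariance with the portfolio:

```python
# Hypothetical figures: two candidate securities with equal expected
# returns but opposite correlation to the existing portfolio.
z = 1.645          # quantile for a 95% confidence level (normal case)
sigma_p = 0.10     # volatility of the existing portfolio's return

candidates = {
    "A": {"sigma": 0.20, "rho": -0.30},  # negatively correlated
    "B": {"sigma": 0.20, "rho": +0.40},  # positively correlated
}

marginal_var = {}
for name, c in candidates.items():
    cov_ip = c["rho"] * c["sigma"] * sigma_p        # cov(R_i, R_p)
    marginal_var[name] = z * cov_ip / sigma_p       # per unit invested

for name, m in marginal_var.items():
    effect = "reduces" if m < 0 else "increases"
    print(f"Security {name}: marginal VaR = {m:+.4f} ({effect} portfolio risk)")
```

With equal expected returns, security A is the better addition: its negative marginal VaR lowers the portfolio VaR while raising expected return, which is exactly the trade-off the foreword describes.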

Acknowledgements

We want to acknowledge the help received in the writing of this book. In particular, we would like to thank Michael May, managing director, Bank of Bermuda Luxembourg S.A. and Christel Glaude, Group Risk Management at KBL Group European Private Bankers.

Part I The Massive Changes in the World of Finance

Introduction
1 The Regulatory Context
2 Changes in Financial Risk Management


Introduction

The financial world of today has three main aspects:

• An insurance market that is tense, mainly because of the events of 11 September 2001 and the claims that followed them.
• Pressure of regulations, which is compelling the banks to quantify and reduce risks hitherto not considered particular to banks (that is, operational risks).
• A prolonged financial crisis, together with a crisis of confidence, which is pressuring the financial institutions to manage their costs ever more carefully.

Against this background, the risk management function is becoming more and more important in the finance sector as a whole, increasing the scope of its skills and giving the decision-makers a contribution that is mostly strategic in nature. The most notable result of this is that the perception of cost is currently geared towards the creation of value, whereas as recently as five years ago shareholders’ perceptions were too heavily weighted towards the ‘cost of doing business’. It is these subjects that we propose to develop in the first two chapters.

1 The Regulatory Context

1.1 PRECAUTIONARY SURVEILLANCE

One of the aims of precautionary surveillance is to increase the quality of risk management in financial institutions. Generally speaking:

• Institutions whose market activity is significant, in terms of contribution to results or of the equity capital consumed as cover, need to set up a risk management function that is independent of the ‘front office’ and ‘back office’ functions.
• When the establishment in question is a consolidating business, it must be a decision-making centre. The risk management function will then be responsible for suggesting a group-wide policy for the monitoring of risks; the management committee then takes the risk management policy decisions for the group as a whole.
• To do this, the establishment must have adequate financial and infrastructural resources for managing risk. The risk management function must have systems for assessing positions and measuring risks, as well as adequate limit systems and human resources.

The aim of precautionary surveillance is to:

• Promote a well-thought-out and prudent business policy.
• Protect the financial stability of the businesses overseen and of the financial sector as a whole.
• Ensure that the organisation and the internal control systems are of suitable quality.
• Strengthen the quality of risk management.

1.2 THE BASLE COMMITTEE

We do not propose to enter into methodological detail on the adequacy1 of equity capital in relation to credit, market and operational risks. We do, however, intend to spend some time examining the underlying philosophy of the work of the Basle Committee2 on banking supervision, paying particular attention to the qualitative dynamic (see 1.2.2 below) on the matter of operational risks.

1.2.1 General information

The Basle Committee on Banking Supervision is a committee of banking supervisory authorities, established by the central bank governors of the Group of Ten countries in 1975. It consists of senior representatives of bank supervisory authorities and central banks from Belgium, Canada, France, Germany, Italy, Japan, Luxembourg, the Netherlands, Sweden, Switzerland, the United Kingdom and the United States. It usually meets at the Bank for International Settlements in Basle, where its permanent Secretariat is located.3

1 Interested readers should read P. Jorion, Financial Risk Manager Handbook (Second Edition), John Wiley & Sons, Inc., 2003, and in particular its section on regulation and compliance.
2 Interested readers should consult http://www.bis.org/index.htm.

1.2.1.1 The current situation

The aim of the capital adequacy ratio is to ensure that the establishment has sufficient equity capital in relation to credit and market risks. The ratio compares the eligible equity capital with the overall equity capital requirements (on a consolidated basis where necessary) and must equal or exceed 100 % (or 8 % if the denominator is multiplied by 12.5). Two methods, one standard and the other based on internal models, allow the requirements in question to be calculated.

In addition, the aim of overseeing and supervising major risks is to ensure that the credit risk is suitably diversified within the banking portfolios (on a consolidated basis where necessary).

1.2.1.2 The point of the ‘New Accord’4

The Basle Committee on Banking Supervision has decided to undertake a second round of consultation on more detailed capital adequacy framework proposals that, once finalised, will replace the 1988 Accord, as amended. The new framework is intended to align capital adequacy assessment more closely with the key elements of banking risks and to provide incentives for banks to enhance their risk measurement and management capabilities.
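The equivalence between the two forms of the capital adequacy ratio described under 1.2.1.1 (at least 100 % of requirements, or at least 8 % once the denominator is multiplied by 12.5) can be checked with a small sketch; the capital figures used here are hypothetical:

```python
# Capital adequacy ratio in its two equivalent forms (hypothetical figures).
# The thresholds coincide because 12.5 = 1 / 0.08.
eligible_capital = 9.0      # eligible equity capital (e.g. EUR billion)
requirements = 8.0          # overall credit + market risk requirements

ratio_vs_requirements = eligible_capital / requirements            # must be >= 100%
ratio_vs_weighted_base = eligible_capital / (12.5 * requirements)  # must be >= 8%

compliant = ratio_vs_requirements >= 1.00
print(f"{ratio_vs_requirements:.1%} of requirements "
      f"({ratio_vs_weighted_base:.2%} of the 12.5x base); compliant: {compliant}")
```

The second form is the familiar 8 % presentation of the ratio against a risk-weighted base; both tests accept or reject exactly the same establishments.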

The Committee’s ongoing work has affirmed the importance of the three pillars of the new framework:

1. Minimum capital requirements.
2. Supervisory review process.
3. Market discipline.

A. First aspect: minimum capital requirements

The primary changes to the minimum capital requirements set out in the 1988 Accord are in the approach to credit risk and in the inclusion of explicit capital requirements for operational risk. A range of risk-sensitive options for addressing both types of risk is elaborated. For credit risk, this range begins with the standardised approach and extends to the ‘foundation’ and ‘advanced’ internal ratings-based (IRB) approaches. A similar structure is envisaged for operational risk. These evolutionary approaches will motivate banks to improve their risk management and measurement capabilities continuously, so as to avail themselves of the more risk-sensitive methodologies and thus more accurate capital requirements.

B. Second aspect: supervisory review process

The Committee has decided to treat interest rate risk in the banking book under Pillar 2 (supervisory review process). Given the variety of underlying assumptions needed, the Committee believes that a better and more risk-sensitive treatment can be achieved through the supervisory review process rather than through minimum capital requirements. Under the second pillar of the New Accord, supervisors should ensure that each bank has sound internal processes in place to assess the adequacy of its capital based on a thorough evaluation of its risks. The new framework stresses the importance of bank management developing an internal capital assessment process and setting targets for capital that are commensurate with the bank’s particular risk profile and control environment.

3 The Bank for International Settlements, Basle Committee on Banking Supervision, Vue d’ensemble du Nouvel accord de Bâle sur les fonds propres, Basle, January 2001, p. 1.
4 Interested readers should also consult: The Bank for International Settlements, Basle Committee on Banking Control, The New Basle Capital Accord, January 2001; and The Bank for International Settlements, Basle Committee on Banking Control, The New Basle Capital Accord: An Explanatory Note, January 2001.

C. Third aspect: market discipline

The Committee regards the bolstering of market discipline through enhanced disclosure as a fundamental part of the New Accord.5 The Committee believes the disclosure requirements and recommendations set out in the second consultative package will allow market participants to assess key pieces of information on the scope of application of the revised Accord, capital, risk exposures, assessment and management processes, and the capital adequacy of banks.

The risk-sensitive approaches developed by the Committee rely extensively on banks’ internal methodologies, giving banks more discretion in calculating their capital requirements. Separate disclosure requirements are put forth as prerequisites for supervisory recognition of internal methodologies for credit risk, credit risk mitigation techniques and asset securitisation. In the future, disclosure prerequisites will also attach to advanced approaches to operational risk. In the view of the Committee, effective disclosure is essential to ensure that market participants can better understand banks’ risk profiles and the adequacy of their capital positions.

1.2.2 Basle II and the philosophy of operational risk6

In February 2003, the Basle Committee published a new version of the document Sound Practices for the Management and Supervision of Operational Risk. It contains a set of principles that make up a structure for managing and supervising operational risks, for banks and their regulators. In fact, risks other than credit and market risks can become more substantial as the deregulation and globalisation of financial services and the increased sophistication of financial technology add to the complexity of the banks’ activities and therefore of their risk profile. By way of example, the following can be cited:

• The increased use of automated technology which, if not suitably controlled, can transform the risk of an error during manual data capture into a system breakdown risk.
• The effects of e-business.
• The effects of mergers and acquisitions on system integration.
• The emergence of banks that offer large-scale services, and the technical nature of the high-performance back-up mechanisms to be put in place.

5 See also Point 1.3, which deals with accounting standards.
6 This section is essentially a summary of the following publication: The Bank for International Settlements, Basle Committee on Banking Control, Sound Practices for the Management and Supervision of Operational Risk, Basle, February 2003. In addition, interested readers can also consult: Cruz M. G., Modelling, Measuring and Hedging Operational Risk, John Wiley & Sons, Ltd, 2003; Hoffman D. G., Managing Operational Risk: 20 Firm-Wide Best Practice Strategies, John Wiley & Sons, Inc., 2002; and Marshall C., Measuring and Managing Operational Risks in Financial Institutions, John Wiley & Sons, Inc., 2001.

• The use of collateral,7 credit derivatives, netting and conversion into securities, with the aim of reducing certain risks but with the likelihood of creating other kinds of risk (for example, legal risk; on this matter, see Point 2.2.1.4 in the section on ‘Positioning the legal risk’).
• Increased recourse to outsourcing and participation in clearing systems.

1.2.2.1 A precise definition?

Operational risk, generally, and according to the Basle Committee specifically, is defined as ‘the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events’. This is a very wide definition, which includes legal risk but excludes strategic and reputational risk.

The Committee emphasises that the precise approach chosen by a bank in the management of its operational risks depends on many different factors (size, level of sophistication, nature and complexity of operations, etc.). Nevertheless, it provides a more precise definition by adding that, despite these differences, clear strategies supervised by the board of directors and the management committee, a solid ‘operational risk’ and ‘internal control’ culture (including, among other things, clearly defined responsibilities and demarcation of tasks), internal reporting, and plans for continuity8 following a highly damaging event are all elements of paramount importance in an effective operational risk management structure for banks, regardless of their size and environment.

Although the definition of operational risk varies de facto between financial institutions, it is still a certainty that some types of event, as listed by the Committee, have the potential to create substantial losses:

• Internal fraud (for example, insider trading on an employee’s own account).
• External fraud (such as forgery).
• Workplace safety.
• All matters linked to customer relations (for example, money laundering).
• Physical damage to buildings (terrorism, vandalism, etc.).
• Telecommunication problems and system failures.
• Process management (input errors, unsatisfactory legal documentation, etc.).

1.2.2.2 Sound practices

The sound practices proposed by the Committee are based on four major themes (and are subdivided into 10 principles):

• Development of an appropriate risk management environment.
• Identification, assessment, monitoring, control and mitigation in a risk management context.
• The role of supervisors.
• The role of disclosure.

7 On this subject, see 2.1.1.4.
8 On this subject, see 2.1.1.3.


Developing an appropriate risk management environment

Operational risk management is first and foremost an organisational issue: the greater the relative importance of ethical behaviour at all levels within an institution, the more risk management is optimised.

The first principle is as follows: the board of directors should be aware of the major aspects of the bank’s operational risks as a distinct risk category that should be managed, and it should approve and periodically review the bank’s operational risk management framework. The framework should provide a firm-wide definition of operational risk and lay down the principles of how operational risk is to be identified, assessed, monitored, and controlled or mitigated.

In addition (second principle), the board of directors should ensure that the bank’s operational risk management framework is subject to effective and comprehensive internal audit9 by operationally independent, appropriately trained and competent staff. The internal audit function should not be directly responsible for operational risk management; this independence may be compromised if the audit function is directly involved in the operational risk management process. In practice, the Committee recognises that the audit function at some banks (particularly smaller banks) may have initial responsibility for developing an operational risk management programme. Where this is the case, banks should see that responsibility for day-to-day operational risk management is transferred elsewhere in a timely manner.

Under the third principle, senior management should have responsibility for implementing the operational risk management framework approved by the board of directors. The framework should be implemented consistently throughout the whole banking organisation, and all levels of staff should understand their responsibilities with respect to operational risk management.
Senior management should also have responsibility for developing policies, processes and procedures for managing operational risk in all of the bank's material products, activities, processes and systems.

Risk management: identification, assessment, monitoring and mitigation/control

The fourth principle states that banks should identify and assess the operational risk inherent in all material products, activities, processes and systems. Banks should also ensure that before new products, activities, processes and systems are introduced or undertaken, the operational risk inherent in them is subject to adequate assessment procedures. Amongst the possible tools used by banks for identifying and assessing operational risk are:

• Self- or risk-assessment. A bank assesses its operations and activities against a menu of potential operational risk vulnerabilities. This process is internally driven and often incorporates checklists and/or workshops to identify the strengths and weaknesses of the operational risk environment. Scorecards, for example, provide a means of translating qualitative assessments into quantitative metrics that give a relative ranking of different types of operational risk exposure. Some scores may relate to risks unique to a specific business line while others may rank risks that cut across business lines. Scores may address the inherent risks, as well as the controls to mitigate them. In addition, scorecards may be used by banks to allocate economic capital to business lines in relation to performance in managing and controlling various aspects of operational risk.

9 See 2.2.1.3.


Asset and Risk Management

• Risk mapping. In this process, various business units, organisational functions or process flows are mapped by risk type. This exercise can reveal areas of weakness and help prioritise subsequent management action.

• Risk indicators. Risk indicators are statistics and/or metrics, often financial, which can provide insight into a bank's risk position. These indicators tend to be reviewed on a periodic basis (such as monthly or quarterly) to alert banks to changes that may be indicative of risk concerns. Such indicators may include the number of failed trades, staff turnover rates and the frequency and/or severity of errors and omissions.

• Measurement. Some firms have begun to quantify their exposure to operational risk using a variety of approaches. For example, data on a bank's historical loss experience could provide meaningful information for assessing the bank's exposure to operational risk.

In its fifth principle, the Committee asserts that banks should implement a process to regularly monitor operational risk profiles and material exposures to losses. There should be regular reporting of pertinent information to senior management and the board of directors that supports the proactive management of operational risk.

In addition (sixth principle), banks should have policies, processes and procedures to control and/or mitigate material operational risks. Banks should periodically review their risk limitation and control strategies and should adjust their operational risk profile accordingly using appropriate strategies, in the light of their overall risk appetite and profile.

The seventh principle states that banks should have in place contingency and business continuity plans to ensure their ability to operate on an ongoing basis and limit losses in the event of severe business disruption.
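As an illustration of the risk-indicator tool described above, the following sketch reviews a set of monthly key risk indicators against warning thresholds. The indicator names, values and limits are invented for the example; they are not drawn from the Basle text.

```python
def review_indicators(observations, thresholds):
    """Return (name, latest_value, limit) for each indicator over its limit."""
    alerts = []
    for name, values in observations.items():
        latest = values[-1]
        if latest > thresholds[name]:
            alerts.append((name, latest, thresholds[name]))
    return alerts

# Invented monthly observations and warning limits.
monthly = {
    "failed_trades": [12, 15, 31],          # count per month
    "staff_turnover_pct": [1.1, 1.3, 1.2],  # % of headcount
    "error_frequency": [4, 6, 5],           # errors and omissions per month
}
limits = {"failed_trades": 25, "staff_turnover_pct": 2.0, "error_frequency": 10}

for name, value, limit in review_indicators(monthly, limits):
    print(f"ALERT: {name} = {value} exceeds warning limit {limit}")
```

In practice the thresholds themselves would be set and periodically revised by management, in line with the monthly or quarterly review cycle mentioned above.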
Role of supervisors

Under the eighth principle, banking supervisors should require that all banks, regardless of size, have an effective framework in place to identify, assess, monitor and control/mitigate material operational risks as part of an overall approach to risk management.

Under the ninth principle, supervisors should conduct, directly or indirectly, regular independent evaluations of a bank's policies, procedures and practices related to operational risk. Supervisors should ensure that there are appropriate mechanisms in place which allow them to remain apprised of developments at banks. Examples of what an independent evaluation of operational risk by supervisors should review include the following:

• The effectiveness of the bank's risk management process and overall control environment with respect to operational risk;
• The bank's methods for monitoring and reporting its operational risk profile, including data on operational losses and other indicators of potential operational risk;
• The bank's procedures for the timely and effective resolution of operational risk events and vulnerabilities;
• The bank's process of internal controls, reviews and audit to ensure the integrity of the overall operational risk management process;
• The effectiveness of the bank's operational risk mitigation efforts, such as the use of insurance;
• The quality and comprehensiveness of the bank's disaster recovery and business continuity plans; and

• The bank's process for assessing overall capital adequacy for operational risk in relation to its risk profile and, if appropriate, its internal capital targets.

Role of disclosure

Banks should make sufficient public disclosure to allow market participants to assess their approach to operational risk management.

1.3 ACCOUNTING STANDARDS

The financial crisis that started in some Asian countries in 1998 and subsequently spread to other parts of the world revealed a need for reliable and transparent financial reporting, so that investors and regulators could take decisions in full knowledge of the facts.

1.3.1 Standard-setting organisations10

Generally speaking, three main standard-setting organisations are recognised in the field of accounting:

• The IASB (International Accounting Standards Board), dealt with below in 1.3.2.
• The IFAC (International Federation of Accountants).
• The FASB (Financial Accounting Standards Board).

The International Federation of Accountants, or IFAC,11 is an organisation based in New York that brings together a number of professional accounting organisations from various countries. Whereas the IASB concentrates on accounting standards, the aim of the IFAC is to promote the accounting profession and harmonise professional standards on a worldwide scale.

In the United States, the standard-setting organisation is the Financial Accounting Standards Board or FASB.12 Although it is associated with the IASB, the FASB has its own standards. Part of the FASB's mandate is, however, to work together with the IASB in establishing worldwide standards, a process that is likely to take some time yet.

1.3.2 The IASB13

In 1998 the ministers of finance and governors of the central banks of the G7 nations decided that private enterprises in their countries should comply with standards, principles and good practice codes decided at international level. They then called on all the countries involved in the global capital markets to comply with these standards, principles and practices. Many countries have now committed themselves, most notably in the European Union, where the Commission is making giant strides towards obliging all quoted companies to publish their consolidated financial reports in compliance with IAS standards.
The IASB, or International Accounting Standards Board, is a private, independent standard-setting body based in London. In the public interest, the IASB has developed

10 http://www.cga-canada.org/fr/magazine/nov-dec02/Cyberguide f.htm.
11 Interested readers should consult http://www.ifac.org.
12 Interested readers should consult http://www.fasb.org.
13 Interested readers should consult http://www.iasc.org.uk/cmt/0001.asp.


a set of standardised accounting rules that are of high quality and easily understandable (known as the IAS standards). Financial statements must comply with these rules in order to ensure suitable transparency and information value for their readers.

Particular reference should be made to Standard IAS 39 on financial instruments, which expresses the IASB's wish to state balance-sheet items essentially at fair value. In particular, it demands that portfolios arising from hedging mechanisms set up in the context of asset and liability management be entered into the accounts at market value (see Chapter 12), regardless of the accounting treatment of the items that they hedge.

In the field of financial risk management, it should be realised that in addition to their impact on asset and liability management, these standards, once adopted, will doubtless affect the volatility of the results published by financial institutions, as well as affecting fluctuations in equity capital.

2 Changes in Financial Risk Management

2.1 DEFINITIONS

Within a financial institution, the purpose of the risk management function is twofold.

1. It studies all the quantifiable and non-quantifiable factors (see 2.1.1 below) that, in relation to each individual person or legal entity, pose a threat to the return generated by rational use of assets, and therefore to the assets themselves.

2. It provides the following solutions aimed at combating these factors.
— Strategic. The onus is on the institution to propose a general policy for monitoring and combating risks, ensure sensible consolidation of risks at group management level where necessary, organise the reports sent to the management committee, participate actively in the asset and liability management committee (see Chapter 12), and so on.
— Tactical. This level of responsibility covers economic and operational assessments when a new activity is planned, checks to ensure that credit has been spread safely across various sectors, the simulation of risk coverage for exchange and interest rate risks and their impact on the financial margin, and so on.
— Operational. These are essentially first-level checks that include monitoring of internal limits, compliance with investment and stop-loss criteria, traders' limits, etc.

2.1.1 Typology of risks

The risks linked to financial operations are classically divided into two major categories:

1. Ex ante non-quantifiable risks.
2. Ex ante quantifiable risks.

2.1.1.1 Standard typology

It is impossible to overemphasise the importance of proactive management in the avoidance of non-quantifiable risks within financial institutions, because:

1. Although these risks cannot be measured, they are nevertheless identifiable, manageable and avoidable.
2. The financial consequences that they may produce are measurable, but a posteriori only.

The many non-quantifiable risks include:

1. The legal risk (see 2.2.1.4), which is likely to lead to losses for a company that carries out financial deals with a third-party institution not authorised to carry out deals of that type.


2. The media risk, which arises when an event undermines confidence in, or the image of, a given institution.

3. The operational risk (see 2.1.1.2 below), although recent events have tended to make this risk more quantifiable in nature.

The quantifiable risks include:

1. The market risk, which is defined as the impact that changes in market value variables may have on the position adopted by the institution. This risk is subdivided into:
— interest rate risk;
— FX risk;
— price variation risk;
— liquidity risk (see 2.1.1.4).

2. The credit risk, which arises when a counterparty is unable or unwilling to fulfil its contractual obligations:
— relative to the on-balance sheet (direct);
— relative to the off-balance sheet (indirect);
— relating to delivery (settlement risk).

2.1.1.2 Operational risk1

According to the Basle Committee, operational risk is defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems, or from external events. At first sight it is difficult to classify risks of this type as ones that could be quantified a priori, but a major change has made the risk quantifiable a priori. In fact, the problems of corporate governance, the much-publicised failures of internal checks that brought about the downfall of certain highly acclaimed institutions, and the combination of regulatory and market pressure have led the financial community to see what has come to be called operational risk management in a completely different light.

Of course operational risk management is not a new practice; its ultimate aim is to manage the added volatility in results produced by operational risk. Banks have always attached great importance to preventing fraud, maintaining the integrity of internal controls, reducing errors and ensuring that tasks are appropriately segregated.
Until recently, however, banks relied almost exclusively on internal control mechanisms within operational entities, together with the internal audit,2 to manage their operational risks. This type of management, however, is now outdated. We have moved on from operational risk management fragmented into business lines to cross-functional integration; the attitude is no longer reactive but proactive. We are looking towards the future instead of back to the past, and have turned from 'cost avoidance' to 'creation of value'.

1 See also Point 1.2.2.
2 Interested readers should consult the Bank for International Settlements, Basle Committee on Banking Supervision, Internal Audit in Banks and the Supervisor's Relationship with Auditors, Basle, August 2001.


The operational risk management of today also includes:

• Identifying and measuring operational risks.
• Analysing potential losses and their causes, as well as ways of reducing and preventing losses.
• Analysing risk transfer possibilities.
• Allocating capital specifically to operational risk.

It is specifically this aspect of measurement and quantification that has brought about the transition from ex post to ex ante. In fact, methodological advances in this field have been rapid and far-reaching, and consist essentially of two types of approach.

• The qualitative approach. This is a process by which management identifies the risks and the controls in place to manage them, essentially by means of discussions and workshops. As a result, the measurement of frequency and impact is largely subjective, but the approach has the advantage of being prospective in nature, and thus allows risks that cannot easily be quantified to be understood.

• The quantitative approach. A specific example, although not the only one, is the loss distribution approach, which is based on a database of past incidents treated statistically using a Value at Risk method. The principal strength of this method is that it allows the concept of correlation between risk categories to be integrated, but its prospective outlook is limited because it accepts the hypothesis of stationarity as true.

Halfway between these two approaches is the scorecards method, based on risk indicators. In this approach, the institution determines an initial regulatory capital level for operational risk, at global level and/or in each business line. Next, it modifies this total as time passes, on the basis of so-called scorecards that attempt to take account of the underlying risk profile and the risk control environment within the various business lines. This method has several advantages:

• It allows a specific risk profile to be determined for each organisation.
• The effect on behaviour is very strong, as managers in each individual entity can act on the risk indicators.
• It allows best practices to be identified and communicated within the organisation.

It is, however, difficult to calibrate the scorecards and to allocate specific economic capital.

A refined quantification of operational risk thus allows:

• Its cost (expected losses) to be made clear.
• Significant exposures (unexpected losses) to be identified.
• A framework to be produced for profit-and-cost analysis (and excessive controls to be avoided).

In addition, systematic analysis of the sources and causes of operational losses leads to:

• Improvements in processes and quality.
• Optimal distribution of best practices.
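The loss distribution approach mentioned above can be sketched with a small Monte Carlo simulation (one possible implementation, not the book's): the number of loss events per year is drawn from a Poisson distribution, each severity from a lognormal, and the capital figure is read off as a high quantile of the simulated annual loss. All parameters below are illustrative, not calibrated to any real loss database, and the exercise rests on the hypothesis of stationarity noted in the text.

```python
import math
import random

def poisson(lam, rng):
    """Draw from a Poisson distribution (Knuth's method, fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_loss(freq_mean, sev_mu, sev_sigma, rng):
    """One simulated year: a Poisson number of events, lognormal severities."""
    n_events = poisson(freq_mean, rng)
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n_events))

def lda_var(n_sims=20_000, freq_mean=8, sev_mu=10.0, sev_sigma=1.5,
            quantile=0.999, seed=1):
    """Read the capital figure off a high quantile of simulated annual losses."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(freq_mean, sev_mu, sev_sigma, rng)
                    for _ in range(n_sims))
    return losses[int(quantile * n_sims) - 1]

print(f"Simulated 99.9 % annual operational loss: {lda_var():,.0f}")
```

A production model would also have to address the correlation between risk categories, which is precisely the strength of the loss distribution approach that this single-category sketch leaves out.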


A calculation of the losses attributable to operational risk therefore provides a framework that allows controls to be linked to performance measurement and shareholder value. That said, this approach to the mastery of operational risk must also allow insurance programmes to be rationalised (the concept of risk transfer), in particular by integrating the business continuity plan (BCP) into it.

2.1.1.3 The triptych: operational risk – risk transfer – BCP

See Figure 2.1.

A. The origin, definition and objective of business continuity planning

A BCP is an organised set of provisions aimed at ensuring the survival of an organisation that has suffered a catastrophic event. The concept of the BCP originated in emergency computer recovery plans, which have now been extended to cover the human and material resources essential for ensuring the continuity of a business's activities. Because of this extension, the activities that lead to the constitution of a BCP concern virtually everyone involved in a business and require coordination by all the departments concerned.

In general, the BCP consists of a number of interdependent plans that cover three distinct fields.

• The preventive plan: the full range of technical and organisational provisions applied on a permanent basis with the aim of ensuring that unforeseen events do not render critical functions and systems inoperative.
• The emergency plan: the full range of provisions, prepared and organised in advance, required to be applied when an incident occurs in order to ensure the continuity of critical systems and functions or to reduce the period of their non-availability.
• The recovery plan: the full range of provisions, prepared and organised in advance, aimed at reducing the period of application of the emergency plan and re-establishing full service functionality as soon as possible.

Figure 2.1 Triptych (insurance, BCP and operational risk management, linked by identification, evaluation, transfer, prevention and a posteriori management)


B. The insurance context

After the events of 11 September 2001, the thought processes and methods relating to the compilation of a BCP were refined. Businesses were forced to realise that the issue of continuity needed to be overseen in its entirety (prevention, insurance, recovery plan and/or crisis management). The tensions prevailing in the insurance market today have only increased this awareness; the reduction in capacity is pushing businesses towards a policy of self-insurance and, in consequence, towards the setting up of new processes believed to favour a more rapid recovery after the occurrence of a major incident.

Several major actors in the market are currently reflecting on the role that they should play in this context, and some guidelines have already been laid down. The insurance and reinsurance companies have an essential communication role to play. They hold a wealth of information, unrivalled and clearly not capable of being rivalled, on 'prejudicial' events and their causes, development, pattern and management. The sharing of insured parties' experiences is a rich source of information on processes, methods and errors from which clients may benefit. Another channel is training: the wealth of information available to insurers and reinsurers also allows them to provide well-informed advice based on a pragmatic approach to the problems encountered.

In this context, the integration of the BCP into the risk management function, provided that insurance management is also integrated, will bring the benefits of shared information and allow better assessment of the practical opportunities for implementation. Similarly, the undisputed links between certain insurance policies and the BCP also argue for integration, together with operational risk management, which must play an active role in the various analyses relating to the continuity plan.

C. The connection between insurance and the BCP

In order to illustrate our theme, we examine here three types of policy.

• The 'all risks' policy, which covers the interests of the policyholder in all fixed and movable assets owned or used by that person. The policy may include an extension for 'extra expenses' cover. Such expenses correspond to the charges that the institution has to bear in order to function 'normally' following an incident (for example, hire of premises or equipment, additional working hours, etc.). In this case, the insurance compensates for the full range of measures taken under a BCP.

• The 'business interruption' policy, which covers the institution against loss of income, interest and additional charges and business expenses arising from the interruption of its activity in its premises following the occurrence of an insured event. The objective, ultimately, is to compensate for losses that affect results following the occurrence of an incident covered by another guarantee (direct damage to property owned by the institution, for example). In this case also, the links are clear: the agreements concluded will compensate for the inevitable operating losses between the occurrence of the event and the resumption of activities, made possible more quickly by the BCP.

• The 'crisis management' policy, which guarantees payment of the consultants' costs incurred by the institution in an effort to deal with its crisis situation, that is, to draw up plans of action and procedures to manage the crisis and to ensure the communication and legal resources needed to contain it and minimise its initial effects. If an event


that satisfies the BCP implementation criteria occurs, this insurance policy will provide additional assistance in the effort to reduce the consequences of the crisis. In addition, this type of agreement usually sets out a series of events likely to lead to a 'crisis situation' (death of a key figure, government inquiry or investigation, violent incidents in the workplace, etc.). Bringing such a policy into play can thus provide an interesting tool for optimising developments in the BCP.

D. The connection between operational risk and the BCP

The starting hypothesis generally accepted for compiling a BCP takes account of the consequences, not the causes, of a catastrophic event. The causes, however, cannot be fully ignored and also need to be analysed to make the continuity plan as efficient as possible. As operational risk is defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems, or from external events, there is a strong tendency for the measures provided for by the BCP to be designed around the occurrence of an operational risk.

E. Specific expressions of the synergy

The synergy described above can find specific expression in the following ways.

• Use of the BCP in the context of negotiations between the institution and its insurers. The premium payable and cover afforded under certain insurance policies (all risks and business interruption) may be directly influenced by the content of the institution's BCP. Coordination of the BCP within the risk management function thus favours orientation of the provisions in the direction 'desired' by the insurers and allows the strategies put in place to be optimised.

• Once set up, the plan must be refined as and when the operational risks are identified and evaluated, thus giving it added value.

• In the same order of ideas, insurance policies can play a major financial role in the application of the steps taken to minimise the effects of the crisis.
• The possibility of providing 'captive cover' to deal with the expenses incurred in applying the steps provided for in the BCP may also be of interest from the financial viewpoint.

2.1.1.4 Liquidity risk:3 the case of a banking institution

This type of risk arises when an institution is unable to cover itself in good time, or at a price that it considers reasonable. A distinction is drawn between ongoing liquidity management, which is the role of the banking treasury, and liquidity crisis management. The Basle Committee asserts that these two aspects must be covered by the banking institutions' asset and liability management committees. A liquidity crisis can be reproduced in a simulation, using methods such as the maximum cash outflow, which allows the survival period to be determined.

3 Interested readers should consult the Bank for International Settlements, Basle Committee on Banking Supervision, Sound Practices for Managing Liquidity in Banking Organisations, Basle, February 2000.


A. Maximum cash outflow and survival period

The first stage consists of identifying the liquidity lines:

1. Is the institution a net borrower or net lender in the financial markets, and does it have a strategic liquidity portfolio?
2. Can the bond and treasury bill portfolios be liquidated through repos and/or resales?
3. Can the 'credit' portfolios of the synthetic asset swap type be liquidated by the same means?

And, last but not least:

4. What would be the potential level of assistance that may be expected from the reference shareholder or from other companies in the same group?

An extreme liquidity crisis situation can then be simulated, on the premise that the institution cannot borrow on the markets and does not rely on assistance from its reference shareholder or from other companies within the group. A number of working hypotheses can be taken as examples. On the crisis day (D), let us suppose that:

• The institution has had no access to borrowing on the interbank market for five working days.
• Both private and institutional clients have immediately withdrawn all their cash deposits within the legal framework:
— All current accounts are repaid on D + 1.
— All deposits with 24 and 48 hours' notice are repaid on D + 1 and D + 2 respectively.
— All savings accounts are repaid on D + 1.
• The institution has to meet all its contractual obligations in terms of cash outflows:
— The institution repays all the borrowings contracted by it and maturing between D and D + 5.
— The institution funds all the loans granted by it with start dates between D and D + 5.
• The only course of action that the institution can take to obtain further liquidity is to sell its assets:
— It is assumed, for example, that the treasury bill portfolio can be liquidated one-quarter through repos on D + 1 and three-quarters by sales on D + 2.
— It is assumed, for example, that the debenture and floating-rate note portfolios can be liquidated via repo or resale 85 % on D + 1, if the currency (GB£) allows, and by sale on D + 2.
— It is assumed, for example, that the synthetic asset swap portfolio can be liquidated 30 % on D + 3, 30 % on D + 4 and the balance on D + 5, taking account of the ratings.

The cash-in and cash-out movements are then simulated for each of the days under review. As a result, the cash balance for each day will be positive or negative. The survival period is the period for which the institution shows a positive cash balance. See Figure 2.2.

In the following example it will be noted that, in view of the hypothetical catastrophic situation adopted, the institution is nevertheless capable of facing a serious liquidity crisis for three consecutive dealing days without resorting to external borrowing.
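The maximum-cash-outflow exercise described above can be sketched in a few lines: stressed daily inflows (forced liquidation of assets) and outflows (deposit run-off, maturing borrowings) are netted day by day, and the survival period is the number of days for which the cumulative cash balance stays positive. The figures, in millions, are hypothetical and do not come from the book's example.

```python
def survival_period(opening_cash, daily_inflows, daily_outflows):
    """Number of consecutive days the cumulative cash balance stays positive."""
    balance = opening_cash
    days = 0
    for cash_in, cash_out in zip(daily_inflows, daily_outflows):
        balance += cash_in - cash_out
        if balance < 0:
            break
        days += 1
    return days

# Hypothetical stressed flows for D+1 .. D+5, in millions: inflows from
# repos and forced asset sales, outflows from deposit withdrawals and
# maturing borrowings.
inflows = [900, 700, 150, 150, 200]
outflows = [1200, 500, 180, 700, 400]
print("Survival period:", survival_period(600, inflows, outflows), "days")
```

With these invented flows the balance turns negative on the fourth day, so the institution survives three dealing days, matching the shape of the example in the text.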


Figure 2.2 Survival period (liquidity in millions of US$, plotted against the number of days from 0 to 5)

It should, however, be noted that recourse to repos in particular will be much more effective if the financial institution optimises its collateral management. We now turn to this point.

B. Collateral management4

Collateral management is one of the three techniques most commonly used in the financial markets to manage credit risks, and most notably counterparty risk. The main reason for its success is that the transaction-related costs are limited (because collateral agreement contracts are heavily standardised). The three fields in which collateral management is encountered are:

1. The repos market.
2. The OTC derivatives market (especially if the institution has no rating).
3. Payment and settlement systems.

The assets used as collateral are:

• Cash (which will be avoided, as it inflates the balance sheet, to say nothing of the operational risks associated with transfers and the risk of depositor bankruptcy).
• Government bonds (although the stocks are becoming scarcer).
• Equities in the major indices (liquid, as their capitalisation is what qualifies them for inclusion in such indices).
• Bonds issued by the private sector (although particular attention will be paid to the rating here).

Generally speaking, the counterparty receiving the collateral is clearly less exposed in terms of counterparty risk. There is, however, a credit risk on the collateral itself: the issuer risk (inherent in the security) and the liquidity risk (associated with the security). The risks linked to the collateral must be 'monitored', as both the price variation in the product that necessitates the collateral and the price variation in the collateral itself affect the coverage of the potential loss on the counterparty and on the collateral that the counterparty will have provided.

Collateral management is further complicated by the difficulty of estimating the correlation between collateral price fluctuations and the 'collateralised' derivative. A negative correlation will significantly increase the credit risk: when the value of the collateral falls, the credit risk increases. The question of adjustment is of the first importance. Too much sophistication could lead to the risk of hesitation by the trader over whether to enter into 'collateralised' deals. Conversely, too little sophistication risks a shift from counterparty risk to issuer and liquidity risk, and what is the good of that?

Collateral management improves the efficiency of the financial markets by making access to the market easier. If it is used, more participants will make the competition keener; prices will be reduced and liquidity will increase. Cases of adverse effects have, however, been noted, especially in times of stress. The future of collateral management is rosy: the keener the competition in the finance markets, the tighter the prices and the greater the need for those involved to run additional risks.

4 Interested readers should consult the Bank for International Settlements, BIS Quarterly Review, Collateral in Wholesale Financial Markets, Basle, September 2001, pp. 57–64. Also: Bank for International Settlements, Committee on the Global Financial System, Collateral in Wholesale Financial Markets: Recent Trends, Risk Management and Market Dynamics, Basle, March 2001.

2.1.2 Risk management methodology

While quantifiable risks, especially market risks, can of course be measured, a good understanding of the risk in question will depend on the accuracy, frequency and interpretation of such measurement.

2.1.2.1 Value of one basis point (VBP)

The VBP quantifies the sensitivity of a portfolio to a parallel, unilateral upward or downward movement of the interest rate curve of one hundredth of one per cent (one basis point). See Figure 2.3.

This simple method quantifies the sensitivity of an asset, or a portfolio of assets, to interest rates, in units of national currency. It must be noted, however, that the probability of a parallel fluctuation in the curve is low, and that the method does not take account of any curvature, or indeed any alteration in the gradient, of the curve.
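A minimal sketch of the VBP calculation, under illustrative assumptions: a set of fixed cash flows is priced off a zero-coupon curve, the whole curve is bumped by one basis point in parallel, and the VBP is the resulting change in present value. The bond and the curve below are invented for the example.

```python
def present_value(cashflows, zero_rates):
    """Discount (time_in_years, amount) pairs off a zero-coupon curve."""
    return sum(amount / (1 + zero_rates[t]) ** t for t, amount in cashflows)

def vbp(cashflows, zero_rates, bump=0.0001):
    """Change in value for a +1 bp parallel shift of the whole curve."""
    bumped = {t: r + bump for t, r in zero_rates.items()}
    return present_value(cashflows, bumped) - present_value(cashflows, zero_rates)

bond = [(1, 60.0), (2, 60.0), (3, 1060.0)]       # 6 % coupon, 1000 face value
curve = {1: 0.0599, 2: 0.0600, 3: 0.0601}
print(f"Portfolio value: {present_value(bond, curve):,.2f}")
print(f"VBP (+1 bp parallel shift): {vbp(bond, curve):.4f}")
```

The result is negative for a long bond position, since a rise in rates reduces its present value; the sign is reversed for the downward bump.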

Figure 2.3 VBP (current interest rate curve against maturity dates, shifted from 6.00 % by ±0.01 %)


Finally, it should be noted that the measurement is instantaneous and captures no probability of occurrence.

2.1.2.2 Scenarios and stress testing

Scenarios and stress testing allow the rates to be altered at more than one point on the curve, upwards for some maturity dates and downwards for others at the same time. See Figure 2.4.

Figure 2.4 Stress testing (current rate curve against a stressed curve, deformed differently across maturity dates)

This method is used for simulating and constructing catastrophe scenarios (a forecast of what, it is assumed, will never happen). More refined than the VBP, this method is more difficult to implement, but the time and probability aspects are still not involved.

2.1.2.3 Value at risk (VaR)

Regardless of the forecasting technique adopted, the VaR is a number that represents the maximum estimated loss for a portfolio that may be multi-currency and multi-product (expressed in units of national currency), due to market risks, for a specific time horizon (such as the next 24 hours), with a given probability of occurrence (for example, five chances in 100 that the actual loss will exceed the VaR). See Figure 2.5. In the case of the VaR, as Figure 2.5 shows, we determine the movement of the curve that, with a certain chance of occurrence (for example, 95 %) for a given time horizon (for example, the next 24 hours), will produce the least favourable fluctuation in value for the portfolio in question, this fluctuation being of course an estimate. In other words, the

Figure 2.5 VaR (current rate curve against the estimated least favourable curve movement at the chosen confidence level)
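As a minimal sketch of the idea (not the methodology explored in Chapter 6), a 95 % one-day parametric VaR can be read off a normal approximation of the daily P&L. The figures below are hypothetical, and the normality assumption is exactly the kind of methodological choice discussed later.

```python
# Sketch of a 95 % one-day parametric (normal) VaR, computed from
# hypothetical daily P&L figures in units of national currency.
from statistics import NormalDist, mean, stdev

daily_pnl = [120, -80, 45, -210, 95, 30, -150, 60, -40, 170,
             -95, 10, 85, -60, 25, -130, 140, -20, 75, -55]

mu, sigma = mean(daily_pnl), stdev(daily_pnl)
z = NormalDist().inv_cdf(0.05)   # ~ -1.645, the 5 % quantile of the normal law
var_95 = -(mu + z * sigma)       # loss exceeded with 5 % probability
print(f"95 % one-day VaR = {var_95:.0f}")
```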

Changes in Financial Risk Management


Table 2.1 VBP, stress testing and VaR

                 VBP                      Stress testing                        VaR
Indication       'Uniform' sensitivity    'Multi-way' sensitivity               Maximum estimated loss
Time             Immediate                Immediate                             Time horizon
Probability      No                       No                                    Yes
Advantages       Simple                   More realistic curve movement         Standard and complete
Disadvantages    Not greatly refined      Probability of scenario occurring?    Methodological choice and hypotheses

actual loss observed must not exceed the VaR in more than 5 % of cases (in our example); otherwise, the VaR will be a poor estimate of the maximum loss. This method, which we explore in detail in Chapter 6, is complementary to VBP and stress testing: none of these methods is sufficient in itself, but together they should produce a sufficiently strong and reliable picture of the risk. As the comparison in Table 2.1 shows, the VaR is a priori the most comprehensive method for measuring market risk. However, methodological choices must be made and well-thought-out hypotheses applied in order to produce a realistic VaR figure. If this is done, VaR can then be considered the market standard for assessing the risks inherent in market operations.
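The backtesting criterion just stated, that the observed loss should breach the VaR on no more than about 5 % of days, can be sketched as a simple exception count (all figures hypothetical):

```python
# Sketch of a VaR backtest: count the days on which the realised loss
# exceeded that day's reported 95 % VaR. All figures are hypothetical.
def exception_rate(daily_pnl, var_figures):
    """Fraction of days whose loss exceeded the VaR (both in currency units)."""
    breaches = sum(1 for pnl, var in zip(daily_pnl, var_figures) if -pnl > var)
    return breaches / len(daily_pnl)

pnl = [12, -35, 8, -60, 22, -18, 5, -41, 30, -9]
var = [40] * len(pnl)              # a constant 95 % VaR figure of 40
rate = exception_rate(pnl, var)
print(f"exception rate = {rate:.0%}")
```

Here 2 breaches in 10 days (20 %) would flag the reported VaR as a poor estimate of the maximum loss.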

2.2 CHANGES IN FINANCIAL RISK MANAGEMENT

2.2.1 Towards an integrated risk management

As Figure 2.6 shows, the risk management function is multidisciplinary, the common denominator being the risk vector. From this to an 'octopus' organisation there is only one step, but. . .

2.2.1.1 Scope of competence

The risk management function must operate within a clearly defined scope of competence, which will often be shaped by the core business of the institution in question. Although it is generally agreed that monitoring the market risk falls to risk management, what happens, for example, to the reputational risk, the legal risk (see Section 2.2.1.4) and the strategic risk? And let us not forget the operational risk: although the Basle Committee (see Chapter 1) explicitly excludes it from the remit of internal audit and places it within the remit of risk management, a significant number of institutions have not yet taken that step. Naturally, this leads to another problem: the controlling aspect of a risk management function is difficult to define, as it is very often limited to certain back-office control checks, and there is also a tendency to confuse the tasks assigned to internal audit with those proper to risk management.

Figure 2.6 Integrated Risk Management. Insurance: property, casualty, liability; multi-line, multi-risk insurance products. Financial: capital markets/treasury risk; market risk, liquidity risk; analytics & modelling; credit analytics. Strategic: strategic risk; strategic, business, process & cultural risk management. Operational: engineering; COSO operations compliance; quality control; self-assessment. Process: COSO financial; financial internal control.
Source: Deloitte & Touche

2.2.1.2 Back office vs. risk management

With regard to the back office vs. risk management debate, it is well worth remembering that, depending on the views of the regulator, the back office generally deals with the administration of operations and as such must, like every other function in the institution, carry out a number of control checks. There are two types of back office control check:
• The daily control checks carried out by staff, for example each employee's monitoring of their suspense account.
• The continuous, ongoing checks, such as monitoring of the accuracy and comprehensiveness of data communicated by persons responsible for business and operational functions, in order to oversee the operations administratively.
However, when mentioning checks to be made by risk management, one refers to exception process checks in accordance with the bank's risk management policy, for example:
• Monitoring any limit breaches (limits, stop losses etc.).
• Monitoring (reconciliation of) any differences between positions (or results) taken (calculated) within various entities (front, back, accounting etc.).

2.2.1.3 Internal audit vs. risk management

The role of audit in a financial group is based on four main aspects:
• Producing a coherent plan for the audit activities within the group.


• Ensuring that the whole of the auditable activities, including the group's subsidiaries and the holding company within the responsibilities of the parent company, are covered through the conduct or review of audits.
• Applying a uniform audit method across all group entities.
• On the basis of a homogeneous style of reporting, providing the directors of the parent company and of the subsidiaries with maximum visibility on the quality of their internal control systems.
Although risk management by its very nature is also involved with the efficiency of the internal control system, it must be remembered that this function is a tool designed to help the management of the institution in its decision making. Risk management is therefore part of the auditable domain of the institution. We saw the various responsibilities of risk management in Section 2.1.

2.2.1.4 Position of legal risk

In practice, every banking transaction is covered by a contract (spoken or written) that contains a certain degree of legal risk. This risk is more pronounced in transactions involving complex securities such as derivative products or security lending. From the regulator's point of view, legal risk is the risk of contracts not being legally enforceable. Legal risk must be limited and managed through policies developed by the institution, and a procedure must be put in place for guaranteeing that the parties' agreements will be honoured. Before entering into transactions related to derivatives, the bank must ensure that its counterparties have the legal authority to enter into these deals themselves. In addition, the bank must verify that the conditions of any contract governing its activities in relation to the counterparty are legally sound. The legal risk linked to stock-market deals can in essence be subdivided into four types of subrisk.
1. Product risk, which arises from the nature of the deal without taking into account the counterparty involved; for example, failure to evaluate the legal risk when new products are introduced or existing products are changed.
2. Counterparty risk. Here the main risk is that the counterparty does not have the legal capacity to embark on the deal in question. For example, the counterparty may not have the capacity to trade in derivative products or the regulatory authority for specific transactions, or indeed may not even have the authority to conclude a repo contract.
3. Transaction risk. This is certainly the most significant part of the legal risk and covers actions undertaken in the conclusion of operations (namely, transaction and documentation). When the deal is negotiated and entered into, problems may arise in connection with regulatory or general legal requirements. For example: closing a spoken agreement without listing the risks involved beforehand, compiling legal documentation or contracts without involving the legal department, negotiating derivative product deals without involving the legal department or without the legal department reviewing the signed ISDA Schedules, signing Master Agreements with foreign counterparties without obtaining an outside legal opinion as to the validity of default, and finally documentary errors such as inappropriate signatures, failure to sign the document or


failure to set up procedures aimed at ensuring that all contractual documentation sent to counterparties is returned to the institution duly signed.
4. Process risk. In the event of litigation in connection with a deal or any other consequence thereof, it will be necessary to undertake certain actions to ensure that the financial consequences are minimised (protection of proof, coordination of litigation etc.). Unfortunately, this aspect is all too often missing: records and proof of transactions are often insufficient (failure to record telephone conversations, destruction of emails etc.).
These four categories of risk are correlated. Fundamentally, the legal risk can arise at any stage in the deal (pre-contractual operations, negotiation, conclusion and post-contractual procedures). In this context, placing the legal risk connected with financial deals within the risk management function presents certain advantages:
• Assessment of the way in which the legal risk will be managed and reduced.
• The function has a central position that gives an overall view of all the bank's activities.
• Increased efficiency in the implementation of legal risk management procedures in financial transactions, and involvement in all analytical aspects of the legal risk on the capital market.

2.2.1.5 Integration

It is worrying to note the abundance of energy being channelled into the so-called problem of the 'fully integrated computerised risk-management system'. One and the same system for market risks, credit risks and operational risks? Not possible! The interesting problem with which we are confronted here is that of integrating systems for monitoring different types of risk. We have to ask ourselves questions on the real added value of getting everything communicated without including the unmentionable: the poorly secured accessories such as spreadsheets and other non-secured relational databases.
Before getting involved with systems and the expensive exercise of developing 'black boxes', we think it wiser to ask a few questions on the cultural integration of risk management within a business. The regulator has clearly understood that the real risk management debate in the next 10 years will be on a qualitative, not a quantitative, level. Before moving on to the quantitative models proposed by Basle II, should we not first of all pay attention to a series of qualitative criteria by organising ourselves around them? Surely the figures produced by advanced operational risk methods are of a behavioural nature, in that they show us the 'score to beat'. To sum up, is it better to be well organised with professional human resources who are aware of the risk culture, or to pride ourselves on being the owners of the Rolls Royce of Value at Risk calculation vehicles? When one remembers that Moody's5 is attaching ever-increasing importance to the evaluation of operational risk as a criterion for awarding its ratings, and the impact of these ratings on finance costs, is it not worth the trouble of achieving compliance from

Moody’s, Moody’s Analytical framework for Operational Risk Management of Banks, Moody’s, January 2003.


the qualitative viewpoint (not to mention the savings on capital made by bringing the institution's own funds into line)? A risk management function should ideally:
• Report directly to executive management.
• Be independent of the front and back office functions.
• Be located at a sufficiently senior hierarchical level to guarantee real independence, having the authority and credibility it needs to fulfil its function, both internally (especially vis-à-vis the front and back offices) and externally (vis-à-vis the regulator, external audit and the financial community in general).
• Be a member of the asset and liability management committee.
• Where necessary, oversee all the decentralised risk-management entities in the subsidiaries.
• Have as its main task the proposal of an institution-wide policy for monitoring risks and ensuring that the decisions taken by the competent bodies are properly applied, relying on the methodologies, tools and systems that it is responsible for managing.
• Have a clearly defined scope of competence, which must not be limited to market and credit risks but extend to operational risks (including insurance and BCP), the concentration risk and, in particular, the risks linked to asset management activity.
• Play a threefold role in the field of risks: advice, prevention and control.
But at what price?

2.2.2 The 'cost' of risk management

A number of businesses believed that they could make substantial savings by spending a bare minimum on the risk management function. It is this serious lack of foresight, however, that has led to collapse and bankruptcy in many respectable institutions. The commonest faults are:
1. One person wearing two 'hats' for the front and back office, a situation that is, to say the least, conducive to fraud.
2. Non-existence of a risk management function.
3. Inability of management, or of the persons delegated by management, to understand the activities of the market and the products used therein.
4. Lack of regular and detailed reporting.
5. Lack of awareness, among employees at all levels, of the quantifiable and/or non-quantifiable risks likely to be generated, albeit unwittingly, by those employees.
6. Incompatibility of the volumes and products processed both with the business and with back-office and accounting procedures.
At present, market and regulatory pressure is such that it is unthinkable for a respectable financial institution not to have a risk management function. Instead of complaining about its cost, however, it is better to make it into a direct and indirect profit centre for the institution, and concentrate on its added value. We have seen that a well-thought-out risk management limits:
• Excessive control (large-scale savings, prevention of doubling-up).


• Indirect costs (every risk avoided is a potential loss avoided and therefore money gained).
• Direct costs (the capital required against the threefold exposure to market, credit and operational risk is reduced).
The promotion of a real risk culture increases the stability and quality of profits, and therefore improves the competitive position of the institution and ensures that it will last.

2.3 A NEW RISK-RETURN WORLD

2.3.1 Towards a minimisation of risk for an anticipated return

Assessing the risk from the investor's point of view produces a paradox:
• On one hand, taking risk is the only way of making money. In other words, the investor is looking for the risk premium that corresponds to his degree of aversion to risk.
• On the other hand, although the 'risk premium' represents profit first and foremost, it also, unfortunately, represents potential loss.
We believe that we are now moving from an era in which investors continually looked to maximise return for a given level of risk (or without thinking about risk at all), into a new era in which the investor, for an anticipated level of return, will not rest until the attendant risk has been minimised. We believe that this attitude will prevail for two different reasons:
1. Institutions that offer financial services, especially banks, know the levels of return that their shareholders demand. For these levels of return, their attitude will be to find the route that allows them to achieve their objective while taking the smallest possible risk.
2. The individual persons and legal entities that make up the clientele of these institutions, faced with a less certain economic future, will look for a level of return that at least allows them to preserve their buying and investing power. This level is therefore known, and they will naturally choose the financial solution that presents the lowest level of risk for that level of return.

2.3.2 Theoretical formalisation As will be explained in detail in Section 3.1.16 in the section on equities, the return R is a random factor for which the probability distribution is described partly by two parameters: a location index, the expected value of which is termed E(R), and a dispersion index, the variance which is noted var(R). The ﬁrst quantity corresponds to the expected return. 6 Readers are referred to this section and to Appendix 2 for the elements of probability theory needed to understand the considerations that follow.


Figure 2.7 Selecting a portfolio (efficient frontier in the (var(R), E(R)) plane; the portfolio P is chosen where the frontier touches the highest indifference curve)

Figure 2.8 Selecting a portfolio (indifference curves cut off at the target return E; the portfolio selected is Q rather than P)

The square root of the second, σ(R) = √var(R), is the standard deviation, which is a measurement of risk. A portfolio, like any isolated security, will therefore be represented by a mean-variance couple. This couple depends on the expected return and the variance of return of the various assets in the portfolio, but also on the correlations between those assets. A portfolio will be 'ideal' for an investor (that is, efficient) if, for a given expected return, it has minimal variance or if, for a fixed variance, it has maximal expected return. All the portfolios thus defined make up what is termed the efficient frontier, which is represented graphically in Figure 2.7. In addition, in the same plane, the indifference curves represent the portfolios with an equivalent mean-variance combination in the investor's eyes (that is, they have for him the same level of utility7). The selection is therefore made theoretically by choosing the portfolio P on the efficient frontier located on the furthest indifference curve (that is, the one with the highest level of utility), as shown in Figure 2.7. In a situation in which an investor no longer acts on the basis of a classic utility structure, but instead wishes for a given return E and then tries to minimise the variance, the indifference curves will be cut off at the ordinate E and the portfolio selected will be Q, which clearly presents a lower expected return than that of P but also carries a lower risk than P. See Figure 2.8.

Readers are referred to Section 3.2.7.
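The selection of Q described above — for a wished-for return E, take the feasible portfolio of minimal variance — can be sketched for two assets by scanning the mean-variance couples. All inputs (expected returns, volatilities, correlation) are hypothetical.

```python
# Sketch of choosing, for a target expected return, the two-asset
# portfolio of minimal variance. All inputs are hypothetical.
e1, e2 = 0.08, 0.12           # expected returns of assets 1 and 2
s1, s2 = 0.10, 0.20           # standard deviations
rho = -0.3                    # correlation between the two assets

def couple(w):
    """(variance, expected return) of w in asset 1 and 1 - w in asset 2."""
    e = w * e1 + (1 - w) * e2
    v = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return v, e

target = 0.10                 # the wished-for return E
candidates = [couple(w / 1000) for w in range(1001)]
feasible = [c for c in candidates if c[1] >= target - 1e-12]  # reach E (float tolerance)
q = min(feasible)             # minimal variance among portfolios reaching E
print(f"chosen portfolio Q: var = {q[0]:.4f}, E(R) = {q[1]:.4f}")
```

With these numbers the chosen couple sits at the boundary where the expected return just reaches E, exactly as Figure 2.8 describes.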

Part II
Evaluating Financial Assets

Introduction
3 Equities
4 Bonds
5 Options


Introduction

Two fundamental elements

Evaluation of financial assets should take account of two fundamental aspects: chance and time.

The random aspect

It is obvious that the changes in value of a financial asset cannot be predicted in a deterministic manner purely by looking at what happened in the past. This is quite clear for equities, whose prices fluctuate according to the law of supply and demand, those prices being themselves dictated by the perception that market participants have of the value of the business in question. The same applies to products that are sometimes described as 'risk-free', such as bonds; here, for example, there is the risk of bankruptcy, the exchange risk and the risk posed by changes in interest rates. For this reason, financial assets can only be evaluated in a random context, and the models that we will be putting together cannot work without the tool of probability (see Appendix 2 for the essential rules).

The temporal aspect

Some financial asset valuation models are termed monoperiodic, such as Markowitz's portfolio theory. These models examine the 'photograph' of a situation at a given moment and use historical observations to analyse that situation. On the other hand, there may be a wish to take account of development over time, with decisions possible at any moment according to the information available at that moment. The random variables mentioned in the previous paragraph then turn into stochastic processes, and the associated theories become much more complex. For this reason, the following chapters (3, 4 and 5) will feature both valuation models (the static viewpoint) and development models (the dynamic viewpoint). In addition, for the valuation of options only, the development models for the underlying asset are essential because of the intrinsic link between this product and the time variable.
The dynamic models can be further divided into discrete models (where development is observed at a number of points spaced out over time) and continuous models (where the time variable takes its values within a continuous range such as an interval). The mathematical tools used for the second type of model are considerably more complex.
The dynamic models can be further divided into discrete models (where development is observed at a number of points spaced out over time) and continuous models (where the time variable takes its values within a continuous range such as an interval). The mathematical tools used for this second model are considerably more complex. Two basic principles The evaluation (or development) models, like all models, are based on a certain number of hypotheses. Some of these are purely technical and have the aim of guaranteeing the meaning of the mathematical expressions that represent them; they vary considerably according to the model used (static or dynamic, discrete or continuous) and may take the form of integrability conditions, restrictions on probability laws, stochastic processes, and so on. Other hypotheses are dictated by economic reality and the behaviour of investors,1 and we will be covering the two economic principles generally accepted in ﬁnancial models here. 1

We will be touching on this last aspect in Section 3.2.6

Evaluating Financial Assets


The perfect market

Often a hypothesis will be put forward that is so simplistic as to be unrealistic: that of the perfect market. Despite its reductive nature, it defines a context in which financial assets can be modelled, and many studies have been conducted with the aim of weakening the various elements of this hypothesis. The perfect market2 is a market governed by the law of supply and demand, on which:
• Information is available in equal measure to all investors.
• There are no transaction or issue costs associated with the financial assets.
• There is no tax deduction on the income produced by the financial assets (where increases in value or dividends are involved, for example).
• Short sales are authorised without restriction.

Absence of arbitrage opportunity

An arbitrage opportunity is a portfolio defined in a context in which:
• No financial movement occurs within the portfolio during the period in question.
• The risk-free interest rate does not alter during the period in question and is valid for any maturity date (a flat, constant rate curve).
It is a portfolio with an initial value (its value at the point of constitution) that is negative, but whose value at a subsequent time is certain to be non-negative. More specifically, if the value of the portfolio at the moment t is termed Vt, we are looking at a portfolio for which: V0 < 0 and VT ≥ 0, or V0 ≤ 0 and VT > 0. Generally speaking, the absence of arbitrage opportunity hypothesis is built into the financial modelling process. In fact, if it were possible to construct such portfolios, there would be considerable interest in putting together a large number of them. However, the numerous market operations (purchases and sales) that this process would require would lead, through the effect of supply and demand, to alterations in the prices of the various portfolio components until the profits obtained through the arbitrage position were all lost.
Under this hypothesis, it can therefore be said that for a portfolio of value V put together at moment 0, if VT = 0, no financial movement occurs in that portfolio between 0 and T, and the interest rate does not vary during that period and is valid for any maturity date (a flat, constant rate curve), then Vt = 0 for any t ∈ [0; T]. This hypothesis of absence of arbitrage can also be expressed as follows: in the context mentioned above, a portfolio put together so as not to contain any random element will always present a return equal to the risk-free rate of interest.

The concept of 'valuation model'

A valuation model for a financial asset is a relation that expresses, quite generally, the price p (or the return) of the asset according to a number of explanatory variables3

2 See for example Miller and Modigliani, Dividend policy, growth and the valuation of shares, Journal of Business, 1961.
3 In these circumstances it is basically the risk of the security that is covered by the study; these explanatory variables are known as risk factors.


X1, X2, . . ., Xn that represent the element(s) of the market likely to affect the price: p = f(X1, X2, . . ., Xn) + ε. The residual ε corresponds to the difference between reality (the effective price p) and the valuation model (the function f). Where the price valuation model is linear (as for equities), the risk factors combine to give, through the Central Limit Theorem, a distribution for the variable Δp that is normal (at least to a first approximation), and that is therefore defined by the two mean-variance parameters only. On the other hand, for some types of assets, such as options, the valuation model ceases to be linear. The previous reasoning is then no longer valid, and neither are its conclusions. We should state that alongside the risk factors that we will be mentioning, the explanatory elements of the market risk can also include:
• The imperfect nature of valuation models.
• The imperfect knowledge of the rules and limitations particular to the institution.
• The impossibility of anticipating changes to legal regulations.
We should also point out that alongside this market risk, the investor will be confronted with other types of risk that correspond to the occurrence of exceptional events such as wars, oil crises etc. This group of risks cannot of course be evaluated using techniques designed for the market risk, and the techniques presented here therefore do not include these 'event-based' risks. This does not mean, however, that the careful risk manager should not use 'catastrophe scenarios', alongside the methods designed to deal with the market risks, in order to take account of the exceptional risks. In this section we will be covering a number of general principles relating to valuation models, and mentioning one or another specific model4 that will be analysed in further detail in this second part.
Linear models

We will look first at the simple case in which the function f of the valuation model is linear or, more specifically, the case in which the price variation Δp = pt − p0 is a first-degree function of the variations ΔX1, . . ., ΔXn of the various explanatory variables and of the residue ε:

Δp = a0 + a1 ΔX1 + . . . + an ΔXn + ε

An example of the linear valuation model is the Sharpe simple index model used for equities (see Section 3.2.4). This model suggests that the variation5 in the price of an equity is a first-degree function of the variation in a general index of the market (of course, the coefficients of this first-degree function vary from one security to another):

Δp = α + β ΔI + ε

In practice, the coefficients α and β are evaluated using a regression technique.6

4 Brearley R. A. and Myers S. C., Principles of Corporate Finance, McGraw-Hill, 1991; Broquet C., Cobbaut R., Gillet R. and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997; Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988; Devolder P., Finance Stochastique, Éditions de l'ULB, 1993; Roger P., L'Évaluation des Actifs Financiers, De Boeck, 1996.
5 This is a relative variation in price, namely a return. The same applies to the index.
6 Appendix 3 contains the statistical base elements needed to understand this concept.
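Estimating the coefficients α and β of the Sharpe model by ordinary least squares can be sketched as follows, on hypothetical weekly returns (β is the covariance of the two return series divided by the variance of the index return):

```python
# Sketch of an OLS fit of the single-index model: stock return = alpha +
# beta * index return + residual. The return series are hypothetical.
from statistics import mean

index_ret = [0.010, -0.004, 0.007, 0.002, -0.012, 0.015, 0.003, -0.006]
stock_ret = [0.014, -0.002, 0.010, 0.001, -0.015, 0.021, 0.006, -0.007]

mi, ms = mean(index_ret), mean(stock_ret)
beta = (sum((i - mi) * (s - ms) for i, s in zip(index_ret, stock_ret))
        / sum((i - mi) ** 2 for i in index_ret))   # sample cov / sample var
alpha = ms - beta * mi                             # regression intercept
print(f"alpha = {alpha:.5f}, beta = {beta:.3f}")
```

A β above 1, as here, means the security amplifies the movements of the market index.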


Nonlinear models independent of time

A more complex case is that in which the function f of the relation p = f(X1, X2, . . ., Xn) + ε is not linear. When time is not taken into consideration, Δp is evaluated using a Taylor development, as follows:

Δp = Σ_{k=1..n} f′_{Xk}(X1, . . ., Xn) ΔXk + (1/2!) Σ_{k=1..n} Σ_{l=1..n} f″_{Xk Xl}(X1, . . ., Xn) ΔXk ΔXl + . . . + ε

For as long as the variations ΔXk in the explanatory variables are small, the terms of the second order and above can be disregarded and it is possible to write:

Δp ≈ Σ_{k=1..n} f′_{Xk}(X1, . . ., Xn) ΔXk + ε

This brings us back to a linear model, which will then be processed as in the previous paragraph. For example, for bonds, when the price of the security is expressed as a function of the interest rate, we are looking at a nonlinear model. If one is content to approximate using only the duration parameter (see Section 4.2.2), a linear approximation is used. If, however, one wishes to introduce the concept of convexity (see Section 4.2.3), the Taylor development will take account of the second-degree term.

Nonlinear models dependent on time

For some types of asset, duration is of fundamental importance and time is one of the arguments of the function f. This is the case, for example, with conditional assets, for which the life span of the contract is an essential element. In this case, there is a need to construct specific models that take account of this additional ingredient. We no longer have a stationary random model, such as Sharpe's example, but a model that combines the random and temporal elements; this is known as a stochastic process. An example of this type of model is the Black–Scholes model for equity options (see Section 5.3.2), where the price p is a function of various variables (price of the underlying asset, exercise price, maturity, volatility of the underlying asset, risk-free interest rate). In this model, the price of the underlying asset is itself modelled by a stochastic process (geometric Brownian motion).
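For the bond case just mentioned, the first-order (duration) and second-order (duration plus convexity) Taylor approximations can be compared with an exact revaluation. The bond below is hypothetical: a 5-year, 6 % annual coupon bond of par 100.

```python
# Sketch comparing the duration-only and duration-plus-convexity Taylor
# approximations of a bond price change with exact revaluation.
cash_flows = {t: 6.0 for t in range(1, 5)}   # annual coupons of 6
cash_flows[5] = 106.0                        # final coupon plus redemption

def price(y):
    return sum(cf / (1 + y) ** t for t, cf in cash_flows.items())

def d_price(y):   # first derivative dP/dy
    return sum(-t * cf / (1 + y) ** (t + 1) for t, cf in cash_flows.items())

def d2_price(y):  # second derivative d2P/dy2
    return sum(t * (t + 1) * cf / (1 + y) ** (t + 2) for t, cf in cash_flows.items())

y0, dy = 0.06, 0.01            # yield moves from 6 % to 7 %
exact = price(y0 + dy) - price(y0)
linear = d_price(y0) * dy                         # duration term only
quadratic = linear + 0.5 * d2_price(y0) * dy**2   # adds the convexity term
print(f"exact {exact:.4f}, linear {linear:.4f}, quadratic {quadratic:.4f}")
```

The second-degree term visibly closes most of the gap left by the linear approximation, which is the point of introducing convexity.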

3 Equities

3.1 THE BASICS

An equity is a financial asset that corresponds to part of the ownership of a company, its value being indicative of the health of the company in question. It may be the subject of a sale and purchase, either by private agreement or on an organised market. The law of supply and demand on this market determines the price of the equity. The equity can also give rise to the periodic payment of dividends.

3.1.1 Return and risk

3.1.1.1 Return on an equity

Let us consider an equity over a period of time [t − 1; t], the duration of which may be one day, one week, one month or one year. The value of this equity at the end of the period, and the dividend paid during the said period, are random variables1 referred to respectively as Ct and Dt. The return on the equity during the period in question is defined as:

Rt = (Ct − Ct−1 + Dt) / Ct−1

We are therefore looking at a value without dimension, which can easily be broken down into the total of two terms:

Rt = (Ct − Ct−1)/Ct−1 + Dt/Ct−1

• The first of these is the increase in value, which is fictitious in that the holder of the equity does not profit from it unless the equity is sold at the moment t.
• The second is the rate of return, which is real as it represents an income.
If one wishes to take account of the rate of inflation when defining the return parameter, the nominal return Rt(n) (excluding inflation), the real return Rt(r) (with inflation) and the rate of inflation τ are all introduced. They are linked by the relation 1 + Rt(n) = (1 + Rt(r)) · (1 + τ). The real return can then be easily calculated:

Rt(r) = (1 + Rt(n)) / (1 + τ) − 1

1 Appendix 2 contains the basic elements of probability theory needed to understand these concepts.


Asset and Risk Management

Example

An equity is quoted at 1000 at the end of May and 1050 at the end of June; it paid a dividend of 80 on 12 June. Its (monthly) return for this period is therefore:

$$R_{\text{June}} = \frac{1050 - 1000 + 80}{1000} = 0.13 = 13\,\%$$

This consists of an increase in value of 5 % and a dividend yield of 8 %. We are looking here at the nominal return. If the annual rate of inflation for that year is 5 %, the real return will be:

$$R_{\text{June}}^{(r)} = \frac{1.13}{(1.05)^{1/12}} - 1 = 0.1254 = 12.54\,\%$$
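The calculations above can be sketched in a few lines of Python; the function names and the monthly deflation convention are ours, not part of the original text.

```python
# Illustrative sketch: return of an equity including the dividend, and
# the real (inflation-adjusted) return, reproducing the worked example.

def simple_return(price_now, price_before, dividend=0.0):
    """R_t = (C_t - C_{t-1} + D_t) / C_{t-1}."""
    return (price_now - price_before + dividend) / price_before

def real_return(nominal_return, annual_inflation, periods_per_year=1):
    """Deflate a nominal return using 1 + R(n) = (1 + R(r)) * (1 + tau)."""
    tau = (1.0 + annual_inflation) ** (1.0 / periods_per_year) - 1.0
    return (1.0 + nominal_return) / (1.0 + tau) - 1.0

r_june = simple_return(1050, 1000, dividend=80)          # 0.13, i.e. 13 %
r_real = real_return(r_june, 0.05, periods_per_year=12)  # about 12.54 %
```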

For certain operations carried out during the return calculation period, such as splits or mergers of equities, free issues or increases in capital, the principle of the definition of return is retained, but care is taken to include comparable values only in the formula. Therefore, when an equity is split into X new equities, the return will be determined by:

$$R_t = \frac{X \cdot C_t - C_{t-1} + D_t}{C_{t-1}} \qquad \text{or} \qquad R_t = \frac{X \cdot C_t - C_{t-1} + X \cdot D_t}{C_{t-1}}$$

depending on whether the dividends are paid before or after the date of the split.

If a return is estimated on the basis of several returns relating to the same duration but for different periods (for example, an 'average' monthly return estimated on the basis of the 12 monthly returns for the year in question), then mathematical common sense dictates that the following logic should be applied:

$$1 + R_{1\,\text{year}} = (1 + R_1)(1 + R_2)\cdots(1 + R_{12})$$

Therefore:

$$R_{1\,\text{month}} = \sqrt[12]{(1 + R_1)\cdots(1 + R_{12})} - 1$$

The expression (1 + R1 month) is the geometric mean of the corresponding expressions for the different months. In practice, however, one generally uses the arithmetic mean:

$$R_{1\,\text{month}} = \frac{R_1 + \cdots + R_{12}}{12}$$

This last relation is not in fact correct, as is shown by the example of a security quoted at 1000, 1100 and 1000 at moments 0, 1 and 2, respectively. The average return on this security is obviously zero. The returns on the two subperiods total 10 % and −9.09 %, respectively, which gives the following values for the average return: 0 % for the geometric mean and 0.45 % for the arithmetic mean. Generally speaking, the arithmetic mean always overestimates the return, all the more so if the fluctuations in the partial returns are significant. We are, however, more inclined to use


the arithmetic mean because of its simplicity2 and because this type of mean is generally used for statistical estimations,3 and it would be difficult to work with variances and covariances (see below) estimated in any other way.

Note

Another calculation formula is also used when no dividend is paid – that of the logarithmic return:

$$R_t^* = \ln\frac{C_t}{C_{t-1}}$$

This formula differs only slightly from the formula shown above, as can be seen from a Taylor expansion in which the second-degree and higher terms, which are almost always negligible, are not taken into consideration:

$$R_t^* = \ln\left(1 + \frac{C_t - C_{t-1}}{C_{t-1}}\right) = \ln(1 + R_t) \approx R_t$$

The advantage of Rt* compared to Rt is that:

• Only Rt* can take values as small as one wishes: if Ct−1 > 0, we have

$$\lim_{C_t \to 0^+} \ln\frac{C_t}{C_{t-1}} = -\infty$$

which is compatible with the statistical assumptions about returns, whereas

$$R_t = \frac{C_t - C_{t-1}}{C_{t-1}} \geq -1$$

• Rt* allows the variation over several consecutive periods to be calculated simply:

$$\ln\frac{C_t}{C_{t-2}} = \ln\left(\frac{C_t}{C_{t-1}} \cdot \frac{C_{t-1}}{C_{t-2}}\right) = \ln\frac{C_t}{C_{t-1}} + \ln\frac{C_{t-1}}{C_{t-2}}$$

which is not possible with Rt. We will, however, be using Rt in our subsequent reasoning.

Example

Let us calculate in Table 3.1 the quantities Rt and Rt* for a few values of Ct. The differences observed are small, and in addition we have:

$$\ln\frac{11\,100}{12\,750} = 0.0039 + 0.0271 - 0.0794 - 0.0907 = -0.1391$$

2 An argument that no longer makes sense with the advent of the computer age.
3 See, for example, the portfolio return shown below.

Table 3.1  Classic and logarithmic returns

Ct          Rt          Rt*
12 750      —           —
12 800      0.0039      0.0039
13 150      0.0273      0.0271
12 150      −0.0760     −0.0794
11 100      −0.0864     −0.0907
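Table 3.1 can be reproduced with a short sketch (the helper names are ours; the last decimal of the logarithmic returns may differ slightly from the printed table because of rounding):

```python
import math

# Classic returns R_t and logarithmic returns R*_t from a price series.

def classic_returns(prices):
    return [(c1 - c0) / c0 for c0, c1 in zip(prices, prices[1:])]

def log_returns(prices):
    return [math.log(c1 / c0) for c0, c1 in zip(prices, prices[1:])]

prices = [12750, 12800, 13150, 12150, 11100]
r = classic_returns(prices)      # ≈ [0.0039, 0.0273, -0.0760, -0.0864]
r_log = log_returns(prices)

# Log returns add up over consecutive periods; classic returns do not.
assert abs(sum(r_log) - math.log(prices[-1] / prices[0])) < 1e-12
```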

3.1.1.2 Return on a portfolio

Let us consider a portfolio consisting of a number N of equities, and note nj, Cjt and Djt respectively the number of equities (j), the price of that equity at the end of period t and the dividend paid on the equity during that period. The total value Vt of the portfolio at the moment t and the total value Dt of the dividends paid during period t are therefore given by:

$$V_t = \sum_{j=1}^{N} n_j C_{jt} \qquad D_t = \sum_{j=1}^{N} n_j D_{jt}$$

The return on the portfolio will therefore be given by:

$$R_{P,t} = \frac{V_t - V_{t-1} + D_t}{V_{t-1}}
= \frac{\displaystyle\sum_{j=1}^{N} n_j (C_{jt} - C_{j,t-1} + D_{jt})}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}
= \sum_{j=1}^{N} \frac{n_j C_{j,t-1}}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}\, R_{jt}$$

The quantity

$$X_j = \frac{n_j C_{j,t-1}}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}$$

represents the proportion of the portfolio invested in the equity (j) at the moment t − 1, expressed in terms of equity market capitalisation, and one
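The identity between the two ways of computing the portfolio return can be checked numerically; the holdings, prices and dividends below are illustrative, not from the book.

```python
# Sketch: the portfolio return from total values equals the weighted
# sum of individual returns, R_P = sum_j X_j * R_j.

def portfolio_return(n, c_prev, c_now, d):
    """One-period return of a buy-and-hold portfolio.

    n: numbers of shares, c_prev/c_now: prices at t-1 and t,
    d: dividends paid per share during the period.
    """
    v_prev = sum(nj * cj for nj, cj in zip(n, c_prev))
    v_now = sum(nj * cj for nj, cj in zip(n, c_now))
    div = sum(nj * dj for nj, dj in zip(n, d))
    return (v_now - v_prev + div) / v_prev

n = [10, 5]
c_prev, c_now, d = [100.0, 40.0], [104.0, 43.0], [2.0, 0.0]
rp = portfolio_return(n, c_prev, c_now, d)                 # 0.0625

# Same result via capitalisation weights X_j and individual returns R_j:
v_prev = sum(nj * cj for nj, cj in zip(n, c_prev))
x = [nj * cj / v_prev for nj, cj in zip(n, c_prev)]
r = [(cn - cp + dj) / cp for cp, cn, dj in zip(c_prev, c_now, d)]
rp_weighted = sum(xj * rj for xj, rj in zip(x, r))
assert abs(rp - rp_weighted) < 1e-12
```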

Note

The relations set out above assume, of course, that the number of each of the securities in the portfolio remains unchanged during the period in question. Even if this condition is satisfied, the proportions Xj will depend on t through the prices. If, therefore, one wishes to consider a portfolio that has identical proportions at two given different moments, the nj must be altered accordingly. This is very difficult to achieve in practice, because of transaction costs and other factors, and we will take no further account of it; instead, our reasoning will proceed as though the proportions remained unchanged.

As for an isolated security, when one considers a return estimated on the basis of several returns relating to the same duration but from different periods, one uses the arithmetic mean instead of the geometric mean, which gives:

$$R_{P,1\,\text{month}} = \frac{1}{12}\sum_{t=1}^{12} R_{P,t} = \frac{1}{12}\sum_{t=1}^{12}\sum_{j=1}^{N} X_j R_{jt} = \sum_{j=1}^{N} X_j \left(\frac{1}{12}\sum_{t=1}^{12} R_{jt}\right)$$

Therefore, according to what was stated above:4

$$R_{P,1\,\text{month}} = \sum_{j=1}^{N} X_j R_{j,1\,\text{month}}$$

3.1.1.3 Market return

From a theoretical point of view, the market can be considered as a portfolio consisting of all the securities in circulation. The market return is therefore defined as:

$$R_{M,t} = \sum_{j=1}^{N} X_j R_{jt}$$

where Xj represents the ratio of the global equity market capitalisation of the security (j) to that of all securities. These figures are often difficult to process, and in practice the concept is usually replaced by that of a stock exchange index that represents the market in question:

$$R_{I,t} = \frac{I_t - I_{t-1}}{I_{t-1}}$$

4 Note that this relationship could not have existed if the arithmetic mean had not been used.


A statistical index is a parameter that describes the change in a magnitude X between the basic period s and the calculation period t:

$$I_t(s) = \frac{X(t)}{X(s)}$$

When X is composite, as for the value of a stock exchange market, several methods of evaluation can be envisaged. It is enough to say that:

• Some relate to prices and others to returns.
• Some use arithmetic means of prices, others use equity market capitalisations.
• Some take account of dividends paid, others do not.
• Some relate to all quoted securities, others are sectorial in nature.

The best known stock exchange indexes are the Dow Jones (USA), the S&P 500 (USA), the Nikkei (Japan) and the Eurostoxx 50 (Europe).

3.1.1.4 Expected return and ergodic estimator

As we indicated above, the return on an equity is a random variable, the distribution of which is usually not fully known. The essential element of this probability law is of course its expectation:5 the expected return Ej = E(Rj). This is an ex ante mean, which as such is inaccessible. For this reason, it is estimated on the basis of available historical observations, calculated for the last T periods. Such an ex post estimator, which relates to historical data, is termed ergodic. The estimator for the expected return on the security (j) is therefore:

$$\bar{R}_j = \frac{1}{T}\sum_{t=1}^{T} R_{jt}$$

In the same way, for a portfolio, the expected return equals:

$$E_P = E(R_P) = \sum_{j=1}^{N} X_j E_j = X^t E$$

introducing the vectors X and E of the proportions and expected returns of the N securities:

$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{pmatrix} \qquad E = \begin{pmatrix} E_1 \\ E_2 \\ \vdots \\ E_N \end{pmatrix}$$

The associated ergodic estimator is thus given by:

$$\bar{R}_P = \frac{1}{T}\sum_{t=1}^{T} R_{Pt} = \sum_{j=1}^{N} X_j \bar{R}_j$$

In the following theoretical developments, we will use the probability terms (expectation), although it is acknowledged that for practical calculations the statistical terms (ergodic estimator) should be used.

5 From here on, we will use the index t not for the random return variable relative to period t, but for referencing a historical observation (the realised value of the random variable).
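The ergodic estimator is nothing more than a historical mean; the following sketch (with illustrative data and our own names) checks that estimating security by security and weighting gives the same result as averaging the periodic portfolio returns.

```python
# Ergodic estimation: R̄_j = (1/T) sum_t R_jt and R̄_P = sum_j X_j R̄_j.

def mean_return(history):
    return sum(history) / len(history)

# Two securities observed over T = 4 periods, with portfolio weights X.
returns = {"A": [0.02, -0.01, 0.03, 0.00], "B": [0.01, 0.02, -0.02, 0.03]}
weights = {"A": 0.6, "B": 0.4}

r_bar = {j: mean_return(h) for j, h in returns.items()}
rp_bar = sum(weights[j] * r_bar[j] for j in returns)

# Same estimate from the periodic portfolio returns themselves.
rp_series = [sum(weights[j] * returns[j][t] for j in returns)
             for t in range(4)]
assert abs(rp_bar - mean_return(rp_series)) < 1e-12
```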

3.1.1.5 Risk of one equity

The performance of an equity cannot be measured on the basis of its expected return alone. Account should also be taken of the magnitude of the fluctuations of this return around its mean value, as this magnitude is a measurement of the risk associated with the security in question. The magnitude of the variations of a variable around its average is measured using dispersion indices. Those adopted here are the variance σj² and the standard deviation σj of the return:

$$\sigma_j^2 = \text{var}(R_j) = E[(R_j - E_j)^2] = E(R_j^2) - E_j^2$$

In practice, this is evaluated using its ergodic estimator:

$$s_j^2 = \frac{1}{T}\sum_{t=1}^{T}(R_{jt} - \bar{R}_j)^2 = \frac{1}{T}\sum_{t=1}^{T} R_{jt}^2 - \bar{R}_j^2$$

Note

Two typical values thus characterise an equity: its (expected) return and its risk. With regard to the distribution of this random variable, if it is possible to accept a normal distribution, no other parameter will be needed, as this law of probability is fully characterised by its mean and its standard deviation. The reason for the omnipresence of this distribution is the central limit theorem (CLT), which requires the variable in question to be the sum of a very large number of 'small' independent effects. This is probably the reason why (because of the number of transactions involved) it is observed empirically that returns relating to long periods (a month or a year) are often normally distributed, while this is not necessarily the case for daily returns, for example. In the latter case, we generally observe distributions with fatter tails6 than under the normal law. We will examine this phenomenon further in Part III, as value at risk is particularly interested in these distribution tails. In this part, however, we will consider that the distribution of the return is characterised by the 'expected return–risk' couple, which is sufficient for the Markowitz portfolio theory.7 In other cases (dynamic models), it will be supposed in addition that the distribution is normal.

Other dispersion indices could be used for measuring risk, such as the mean deviation E(|Rj − Ej|) or the semi-variance, which is defined like the variance but takes account only of those return values that are less than the expected return. It is nevertheless the variance (and its equivalent, the standard deviation) that is almost always used, because of its probability-related and statistical properties, as will be seen in the definition of portfolio risk.

3.1.1.6 Covariance and correlation

The risk of a portfolio depends of course on the risk of the securities of which it is composed, but also on the links present between the various securities, through the effect

6 This is referred to as a leptokurtic distribution.
7 Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77–91.


of diversification. The linear dependence between the return of the security (i) and that of the security (j) is measured by the covariance:

$$\sigma_{ij} = \text{cov}(R_i, R_j) = E[(R_i - E_i)(R_j - E_j)] = E(R_i R_j) - E_i E_j$$

This is evaluated by the ergodic estimator:

$$s_{ij} = \frac{1}{T}\sum_{t=1}^{T}(R_{it} - \bar{R}_i)(R_{jt} - \bar{R}_j) = \frac{1}{T}\sum_{t=1}^{T} R_{it}R_{jt} - \bar{R}_i\bar{R}_j$$

The interpretation of the sign of the covariance is well known, but its order of magnitude is difficult to interpret. To avoid this problem, we use the correlation coefficient:

$$\rho_{ij} = \text{corr}(R_i, R_j) = \frac{\sigma_{ij}}{\sigma_i \sigma_j}$$

For this coefficient, the ergodic estimator is of course given by:

$$r_{ij} = \frac{s_{ij}}{s_i s_j}$$

Remember that this last parameter is a pure number lying between −1 and 1, whose sign indicates the direction of the dependency between the two variables, and whose values close to ±1 correspond to near-perfect linear relations between the variables.

3.1.1.7 Portfolio risk
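The ergodic estimators above translate directly into code; the two return series below are illustrative, and the population form (division by T, as in the text) is used rather than the unbiased T − 1 form.

```python
# Ergodic estimators for covariance and correlation of two return series.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """s_ij = (1/T) * sum_t (x_t - x̄)(y_t - ȳ)."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def corr(xs, ys):
    return cov(xs, ys) / (cov(xs, xs) ** 0.5 * cov(ys, ys) ** 0.5)

ri = [0.02, -0.01, 0.03, 0.00]
rj = [0.01, -0.02, 0.02, 0.01]
s_ij = cov(ri, rj)     # 0.0002
r_ij = corr(ri, rj)    # a pure number between -1 and 1
assert -1.0 <= r_ij <= 1.0
```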

If one remembers that $R_{P,t} = \sum_{j=1}^{N} X_j R_{jt}$, then, given the formula for the variance of a linear combination of random variables, the variance of the return on the portfolio takes the following form:

$$\sigma_P^2 = \text{var}(R_P) = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_{ij} = X^t V X$$

Here σii = σi², and we have defined:

$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{pmatrix} \qquad V = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1N} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{N1} & \sigma_{N2} & \cdots & \sigma_N^2 \end{pmatrix}$$

If one wishes to show the correlation coefficients, the above formula becomes:

$$\sigma_P^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_i \sigma_j \rho_{ij}$$
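The quadratic form X^t V X can be checked against the expanded double sum; the weights and covariance matrix below are illustrative.

```python
# Portfolio variance as sigma_P^2 = sum_i sum_j X_i X_j sigma_ij.

def portfolio_variance(x, v):
    n = len(x)
    return sum(x[i] * x[j] * v[i][j] for i in range(n) for j in range(n))

x = [0.3, 0.7]                    # proportions X
v = [[0.03, 0.01],                # covariance matrix V
     [0.01, 0.02]]
var_p = portfolio_variance(x, v)

# Hand expansion of the two-security case gives the same value.
expanded = 0.3**2 * 0.03 + 0.7**2 * 0.02 + 2 * 0.3 * 0.7 * 0.01
assert abs(var_p - expanded) < 1e-15
```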


Example

The risk of a portfolio consisting of two equities in respective proportions 30 % and 70 %, and such that σ1² = 0.03, σ2² = 0.02 and σ12 = 0.01, is calculated either by:

$$\sigma_P^2 = 0.3^2 \cdot 0.03 + 0.7^2 \cdot 0.02 + 2 \cdot 0.3 \cdot 0.7 \cdot 0.01 = 0.0167$$

or by:

$$\sigma_P^2 = \begin{pmatrix} 0.3 & 0.7 \end{pmatrix}\begin{pmatrix} 0.03 & 0.01 \\ 0.01 & 0.02 \end{pmatrix}\begin{pmatrix} 0.3 \\ 0.7 \end{pmatrix} = 0.0167$$

It is interesting to compare the portfolio risk with the individual security risks. The 'expected return–risk' approach to the portfolio therefore requires a knowledge of the expected returns and individual variances, as well as of all the pairwise covariances. Remember that the multi-normal distribution is characterised by these elements, but that Markowitz's portfolio theory does not require this law of probability.

3.1.1.8 Security risk within a portfolio

The portfolio risk can also be written as:

$$\sigma_P^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_{ij} = \sum_{i=1}^{N} X_i \left(\sum_{j=1}^{N} X_j \sigma_{ij}\right)$$

The total risk for the security (i) within the portfolio therefore depends not only on σi² but also on the covariances with the other securities in the portfolio. It can be developed as follows:

$$\sum_{j=1}^{N} X_j \sigma_{ij} = \sum_{j=1}^{N} X_j\,\text{cov}(R_i, R_j) = \text{cov}\left(R_i, \sum_{j=1}^{N} X_j R_j\right) = \text{cov}(R_i, R_P) = \sigma_{iP}$$

The relative importance of the total risk for the security (i) in the portfolio risk is therefore measured by:

$$\frac{\sum_{j=1}^{N} X_j \sigma_{ij}}{\sigma_P^2} = \frac{\sigma_{iP}}{\sigma_P^2}$$

These relative risks are such that:

$$\sum_{i=1}^{N} X_i \frac{\sigma_{iP}}{\sigma_P^2} = 1$$


Example

Using the data in the previous example, the total risks for the two securities within the portfolio are given by:

$$\sigma_{1P} = 0.3 \cdot 0.03 + 0.7 \cdot 0.01 = 0.016 \qquad \sigma_{2P} = 0.3 \cdot 0.01 + 0.7 \cdot 0.02 = 0.017$$

The corresponding relative risks therefore total 0.958 and 1.018 respectively. Note that we indeed have: 0.3 · 0.958 + 0.7 · 1.018 = 1.

The concept of relative risk, applied to the market as a whole rather than to a particular portfolio, leads us to the concept of systematic risk:

$$\beta_i = \frac{\sigma_{iM}}{\sigma_M^2}$$

It represents the relative importance of the total risk of the security (i) in the market risk, that is, the volatility of Ri in relation to RM, as the quotient in question is the slope of the regression line in which the return on the security (i) is explained by the return of the market (see Figure 3.1):

$$R_i = \alpha_i + \beta_i R_M$$

It can be accepted, in conclusion, that the risk of a particular security should never be envisaged in isolation from the rest of the portfolio in which it is included.

3.1.2 Market efficiency

Here follows a brief summary of the concept of market efficiency,8 which is a necessary hypothesis (or one that must at least be verified approximately) for the validity of the various models of financial analysis and is closely linked to the concept of the 'perfect market'.
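Before moving on, the relative risks from the example above, and the fact that their weighted sum equals one, can be checked numerically (a sketch with our own variable names):

```python
# Total risk of each security within the portfolio, sigma_iP = sum_j X_j
# sigma_ij, and the corresponding relative risks sigma_iP / sigma_P^2.

x = [0.3, 0.7]
v = [[0.03, 0.01],
     [0.01, 0.02]]
n = len(x)

var_p = sum(x[i] * x[j] * v[i][j] for i in range(n) for j in range(n))
sigma_ip = [sum(x[j] * v[i][j] for j in range(n)) for i in range(n)]
rel = [s / var_p for s in sigma_ip]        # ≈ 0.958 and 1.018

# The weighted relative risks add up to one, as stated in the text.
assert abs(sum(xi * ri for xi, ri in zip(x, rel)) - 1.0) < 1e-12
# The beta of a security is the same quotient taken with respect to the
# market portfolio: beta_i = sigma_iM / sigma_M^2.
```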

(Figure 3.1 shows a scatter of security returns Ri against market returns RM, with the fitted regression line.)

Figure 3.1 Systematic risk

8 A fuller treatment of this subject is found in Gillet P., L'Efficience des Marchés Financiers, Economica, 1999.


3.1.2.1 General principles

It was Eugene Fama9 who explicitly introduced the concept of 'efficiency'. The definition that he gave to the concept was as follows: 'A financial market is said to be efficient if, and only if, all the available information on each financial asset quoted on the market is immediately incorporated in the price of that asset'. Indeed, he goes so far as to say that there is no overvaluation or undervaluation of securities, and also that no asset can produce a return greater than that which corresponds to its own characteristics. This hypothesis therefore guarantees equality of treatment of the various investors: no category of investor has any informational advantage. The information available on this type of market therefore allows an optimum allocation of resources. The economic justification for this concept is that the various investors, in competition and possessing the same information, will, through their involvement and because of the law of supply and demand, make the price of a security coincide with its intrinsic value.

We are of course looking at a hypothesis that divides the supporters of fundamental analysis from the supporters of technical analysis. The former accept the hypothesis and indeed make it the entire basis for their reasoning; they assume that returns on securities are unpredictable variables and propose portfolio management techniques that involve minimising the risks linked to these variables.10 The latter propose methods11 that involve predicting prices on the basis of historically observed movements.

From a more mathematical point of view, market efficiency consists of assuming that prices follow a random walk, that is, that the sequence Ct − Ct−1 (t = 1, 2, . . .) consists of random variables that are independent and identically distributed. In these circumstances, such a variation cannot be predicted on the basis of the available observations.
The economic conditions that define an efficient market are:

• The economic agents involved in the market behave rationally: they use the available information coherently and aim to maximise the expected utility of their wealth.
• The information is available simultaneously to all investors, and the reaction of investors to the information is instantaneous.
• The information is available free of charge.
• There are no transaction costs or taxes on the market.
• The market in question is completely liquid.

It is obvious that these conditions can never all be strictly satisfied in a real market. This raises the question of whether the differences are significant and whether they have the effect of invalidating the efficiency hypothesis. This question is addressed in the following paragraphs, and the analysis is carried out at three levels according to the accessibility of the information. The least that can be said is that the conclusions of the research carried out in order to test efficiency are mixed and should not be used as a basis for forming clear and definitive ideas.

9 Fama E. F., Behaviour of Stock Market Prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Fama E. F., Random Walks in Stock Market Prices, Financial Analysts Journal, 1965. Fama E. F., Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, Vol. 25, 1970.
10 This approach is adopted in this work.
11 Refer for example to Bechu T. and Bertrand E., L'Analyse Technique, Economica, 1998.


3.1.2.2 Weak form

The weak form of the efficiency hypothesis postulates that it is not possible to gain any particular advantage from the range of historical observations; prices therefore fully incorporate their own past values. The tests applied in order to verify this hypothesis relate to the possibility of predicting prices on the basis of their history. Here are a few of the analyses carried out:

• The autocorrelation test: is there a correlation (positive or negative) between successive security-return values that allows forecasts to be made?
• The run test: is the distribution of the lengths of sequences of positive returns and negative returns normal?
• Statistical tests for a random walk.
• Simulation tests of technical analysis methods: do these speculation techniques give better results than passive management?

Generally speaking, most of these tests lead to acceptance of the weak efficiency hypothesis, even though the most demanding tests from the statistical viewpoint sometimes invalidate it.

3.1.2.3 Semi-strong form

The semi-strong form of the efficiency hypothesis postulates that it is not possible to gain any particular advantage from information made public in relation to securities; prices therefore adjust instantly and correctly when an event such as an increase in capital, a split of securities, a change of dividend policy, a balance-sheet publication or a take-over bid is announced publicly. The tests carried out to verify this hypothesis therefore relate to the effects of the events announced. They consist successively of:

• Determining the theoretical return on a security, Rit = αi + βi RMt, on the basis of historical observations relating to a period that does not include such events.
• When such an event occurs, comparing the difference between the theoretical return and the real return.
• Measuring the reaction time, that is, the time needed for prices to adjust.
3.1.2.4 Strong form

The strong form of the efficiency hypothesis postulates that it is not possible to gain any particular advantage from non-public information relating to securities; prices therefore adjust instantly and correctly when an event that is not public, that is, known only to insiders, occurs. The tests carried out to verify this hypothesis therefore relate to the use of privileged information. They follow a method similar to that used for the semi-strong form, but in specific circumstances:

• In recognised cases of misdemeanour by an insider.


• In cases of intensive trading on a market without the public being informed.
• In cases of intensive trading on the part of insiders.
• In cases of portfolios managed by professionals likely to have specific information before the general public has it, as in collective investment organisations.

3.1.2.5 Observed cases of systematic inefficiency

Although the above analyses suggest that the efficiency hypothesis can be globally accepted, cases of systematic inefficiency have been discovered. In these cases, the following have sometimes been observed:

• Higher than average profitability at the end of the week, month or year.
• Higher profitability for companies with low equity market capitalisation than for high-capitalisation companies.

Alongside these anomalies, pockets of inefficiency allowing arbitrage may present themselves. Their origin may be:

• Speculative bubbles, in which the price of a security differs significantly and for a long time from its intrinsic value before eventually returning to it, without movements in the market's economic variables explaining the difference.
• Irrational behaviour by certain investors.

These various elements, although at odds with the efficiency hypothesis, do not, however, bring it into question. In addition, the profit to investors wishing to benefit from them will frequently be lost in transaction costs.

3.1.2.6 Conclusion

We quote P. Gillet in conclusion of this analysis. Financial market efficiency appears to be all of the following: an intellectual abstraction, a myth and an objective.

The intellectual abstraction. Revealed by researchers, the theory of financial market efficiency calls into question a number of practices currently used by financial market professionals, such as technical analysis. (. . .) It suggests passive management, while technical analysis points towards active management. (. . .) In addition, it is one of the basic principles of modern financial theory. (. . .)
The myth. All the hypotheses necessary for accepting the theory of efficiency are accepted by the theory's supporters. In addition to the classic hypotheses on the circulation of information or the absence of transaction costs, which have been addressed, other underlying hypotheses have as yet been little explored, especially those linked to the behaviour of investors and to liquidity. (. . .)

An objective. The market authorities are aware that the characteristics of efficiency make the market healthier and more credible, and therefore attract investors and businesses. To make a


market more efficient is to reduce the risk of a speculative bubble. (. . .) The aim of the authorities is therefore to improve the efficiency of the financial markets (. . .).

3.1.3 Equity valuation models

The principle of equivalence, the basis of financial mathematics, allows one to express that the intrinsic value V0 of an equity at the moment 0 is equal to the discounted value of the future financial flows that the security will generate. Put more simply, if one assumes that the dividends (the future financial flows) are paid for periods 1, 2, etc. and amount respectively to D1, D2, etc., and if the discount rate k is introduced, we obtain the relation:

$$V_0 = \sum_{t=1}^{\infty} D_t (1 + k)^{-t}$$

Note 1

The direct use of this relation can be tricky. In fact:

• The values of all the future dividends are not generally known.
• The formula assumes a constant discount rate (ad infinitum).
• It does not allow account to be taken of specific operations such as splits or regroupings of equities, free issues or increases in capital.

The formula does, however, provide a number of services, and later we will introduce a simplified formula that can be obtained from it.

Note 2

This formula, which links V0 and k, can be used in two ways:

• If V0 is known (the intrinsic value on an efficient market), the value of k can be deduced from it and will then represent the expected rate of return for the security in question.
• If k is given, the formula provides an assessment of the security's value, which can then be compared with the actual price C0, thus allowing an overvaluation or undervaluation of the security to be detected.

3.1.3.1 The Gordon–Shapiro formula

This relation12 is based on the following hypotheses:

• The growth of the firm is self-financed.
• The rate of return r of the investments and the rate of distribution d of the profits are constant from one period to the next.

12 See Gordon M. and Shapiro E., Capital equipment analysis: the required rate of profit, Management Science, Vol. 3, October 1956.


Under these hypotheses, if Bt denotes the profit per share for the period t and Et the accounting value per equity at the moment t (capital divided by number of equities), we have:

$$D_t = d \cdot B_t \qquad B_t = r \cdot E_{t-1}$$

And therefore:

$$B_{t+1} = B_t + r \cdot (B_t - D_t) = B_t [1 + r(1 - d)]$$

The profits therefore increase at a constant rate g = r(1 − d), which is the rate of profitability of the investments multiplied by the proportion of profits retained. The dividends also increase at this constant rate, so that Dt+1 = (1 + g) · Dt and hence Dt = D1(1 + g)^{t−1}. The present value can therefore be worked out as follows:

$$V_0 = \sum_{t=1}^{\infty} D_1 (1+g)^{t-1} (1+k)^{-t} = \frac{D_1}{1+k}\sum_{t=0}^{\infty}\left(\frac{1+g}{1+k}\right)^t = \frac{D_1}{(1+k)\left(1-\dfrac{1+g}{1+k}\right)} = \frac{D_1}{k-g}$$

This is valid provided the discount rate k is greater than the growth rate g. This leads to the Gordon–Shapiro formula:

$$V_0 = \frac{D_1}{k-g} = \frac{d B_1}{k-g} = \frac{d r E_0}{k-g}$$

Example The capital of a company consists of 50 000 equities, for a total value of 10 000 000. The investment proﬁtability rate is 15 %, the proﬁt distribution rate 40 %, and the discount rate 12 %. The proﬁt per equity will be: B = 0.15 ·

10 000 000 = 30 50 000

The dividend per equity will therefore be D = 0.4 × 30 = 12. In addition, the rate of growth is given as follows: g = 0.15 × (1 − 0.4) = 0.09.

The Gordon–Shapiro formula therefore leads to:

$$V_0 = \frac{12}{0.12 - 0.09} = \frac{12}{0.03} = 400$$

The market value of this company is therefore 50 000 × 400 = 20 000 000, while its accounting value is a mere 10 000 000.

The Gordon–Shapiro formula also produces the equation k = g + D1/V0, which shows that the return k can be broken down into the dividend growth rate and the dividend yield per security.
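The Gordon–Shapiro valuation is a one-liner; the sketch below (function and variable names are ours) reproduces the worked example.

```python
# Gordon-Shapiro: V0 = D1 / (k - g), with g = r * (1 - d).

def gordon_shapiro(d1, k, g):
    """Present value of a dividend stream growing at constant rate g < k."""
    if k <= g:
        raise ValueError("requires k > g")
    return d1 / (k - g)

r, d, k = 0.15, 0.40, 0.12          # profitability, payout, discount rates
e0 = 10_000_000 / 50_000            # accounting value per equity: 200
b1 = r * e0                         # profit per equity: 30
d1 = d * b1                         # dividend per equity: 12
g = r * (1 - d)                     # growth rate: 0.09
v0 = gordon_shapiro(d1, k, g)       # intrinsic value: 400
```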

3.1.3.2 The price–earnings ratio

One of the most commonly used evaluation indicators is the PER. It equals the ratio of the price of the equity to the expected net profit per equity:

$$PER_0 = \frac{C_0}{B_1}$$

Its interpretation is quite clear: when purchasing an equity, one pays PER0 × ¤1 for each ¤1 of profit. Its inverse (profit over price) is often considered as a measure of the return on a security, and securities whose PER is below the market average are considered to be undervalued and therefore of interest.

This indicator can be interpreted using the Gordon–Shapiro formula, if the hypotheses underlying that formula are satisfied. In fact, by replacing the price with the value V0 given by this formula:

$$C_0 = \frac{D_1}{k-g} = \frac{d B_1}{k - r(1-d)}$$

we arrive directly at:

$$PER_0 = \frac{d}{k - r(1-d)}$$

This allows the following expression to be obtained for the rate of return k:

$$k = r(1-d) + \frac{d}{PER_0} = r(1-d) + \frac{1}{PER_0} - \frac{1-d}{PER_0}$$

As PER0 = C0/(rE0), we find that:

$$k = \frac{r(1-d)(C_0 - E_0)}{C_0} + \frac{1}{PER_0}$$

Example If one takes the same ﬁgures as in the previous paragraph:

Equities

51

r = 15 % d = 40 % 10 000 000 = 200 E0 = 50 000 and the effectively observed price is 360, we arrive at: PER 0 = This allows the rate of output13 to be determined as follows:

360 = 12. 30

1 0.15 · (1 − 0.4) · (360 − 200) + 12 360 = 0.0833 + 0.04

k=

= 12.33 %
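The implied rate of return can be sketched as follows (an illustration under the Gordon–Shapiro hypotheses; the function name is ours):

```python
# Rate of return implied by the PER:
# k = r(1-d)(C0-E0)/C0 + 1/PER0, with PER0 = C0 / B1 and B1 = r * E0.

def implied_rate_of_return(r, d, c0, e0):
    per0 = c0 / (r * e0)
    return r * (1 - d) * (c0 - e0) / c0 + 1 / per0

k = implied_rate_of_return(r=0.15, d=0.40, c0=360, e0=200)  # ≈ 12.33 %

# At the intrinsic value C0 = V0 = 400 the formula gives k = 12 % again,
# as noted in the footnote to the example.
k_intrinsic = implied_rate_of_return(r=0.15, d=0.40, c0=400, e0=200)
```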

3.2 PORTFOLIO DIVERSIFICATION AND MANAGEMENT

3.2.1 Principles of diversification

Putting together an optimum equity portfolio involves answering the following two questions, given that a list of N equities is available on the market:

• Which of these equities should I choose?
• In what quantity (number or proportion)?

A first approach is to look for the portfolio that provides the greatest return. This would logically lead to holding a portfolio consisting of just one security, the one with the greatest expected return. Unfortunately, it misses out the risk aspect completely and can lead to a catastrophic scenario if the price of the chosen security falls. The correlations between the returns on the various available securities can, on the other hand, help compensate for the fluctuations of the various portfolio components. This, in sharp contrast to the approach described above, can help reduce the portfolio risk without reducing its expected return too much. It is this phenomenon that we will analyse here and use at a later stage to put together an optimum portfolio.

3.2.1.1 The two-equity portfolio

According to what was stated above, the expected return and variance of a two-equity portfolio held in proportions14 X1 and X2 are given as follows:

$$E_P = X_1 E_1 + X_2 E_2 \qquad \sigma_P^2 = X_1^2\sigma_1^2 + X_2^2\sigma_2^2 + 2X_1X_2\sigma_1\sigma_2\rho$$

13 Of course, if the price had been equal to the intrinsic value V0 = 400, we would arrive at k = 12 %.
14 It is implicitly supposed in this paragraph that the proportions are between 0 and 1, that is to say, there are no short sales.


In order to show clearly the effect of diversification (the impact of correlation on risk), let us first consider the case in which the two securities have the same expected return (E1 = E2 = E) and the same risk (σ1 = σ2 = σ). Since X1 + X2 = 1, the equations become:

$$E_P = E \qquad \sigma_P^2 = (X_1^2 + X_2^2 + 2X_1X_2\rho)\sigma^2$$

The expected return on the portfolio is equal to that on the securities, but the risk is lower: the maximum value that it can take corresponds to ρ = 1, for which σP = σ, and when ρ < 1 we have σP < σ. Note that in the case of a perfect negative correlation (ρ = −1), the risk can be written σP² = (X1 − X2)²σ². This cancels out if one chooses X1 = X2 = 1/2; in this case, the expected return is retained but the risk is eliminated completely.

Let us now envisage the more general case in which the expected returns and the risks are arbitrary. An equity is characterised by a couple (Ei, σi) for i = 1 or 2 and can therefore be represented as a point in the (E, σ) plane; the same applies, of course, to the portfolio, which corresponds to the point (EP, σP). Depending on the values given to X1 (and therefore to X2), the representative point of the portfolio describes a curve in the (E, σ) plane. Let us now study briefly the shape of this curve with respect to the values of the correlation coefficient ρ.

When ρ = 1, the portfolio variance15 becomes σP² = (X1σ1 + X2σ2)². By eliminating X1 and X2 from the three equations

$$E_P = X_1 E_1 + X_2 E_2 \qquad \sigma_P = X_1\sigma_1 + X_2\sigma_2 \qquad X_1 + X_2 = 1$$

we arrive at the relation:

$$\sigma_P = \frac{E_P - E_2}{E_1 - E_2}\,\sigma_1 + \frac{E_1 - E_P}{E_1 - E_2}\,\sigma_2$$

This expresses σP as a first-degree function of EP, and the full range of portfolios is therefore the segment of the straight line that links the representative points of the two securities (see Figure 3.2).

Figure 3.2 Two-equity portfolio (ρ = 1 case)

15. Strictly speaking, one should say 'the portfolio return variance'.


Equities


Faced with the situation shown on the left of Figure 3.2, the investor will choose a portfolio located on the segment according to his attitude to risk: portfolio (1) gives a low expected return but presents little risk, while portfolio (2) is the precise opposite. Faced with the situation shown on the right-hand graph, there is no room for doubt: portfolio (2) is better than portfolio (1) in terms of both expected return and risk incurred.

When ρ = −1, the variance of the portfolio will be σP² = (X1 σ1 − X2 σ2)², in other words σP = |X1 σ1 − X2 σ2|. Applying the same reasoning as above leads to the following conclusion: the portfolios that can be constructed make up two straight-line segments issuing from points (1) and (2), which meet at a point on the vertical axis (σ = 0) and have slopes that are equal except for the sign (see Figure 3.3). Of these portfolios, of course, only those located on the upper segment will be of interest; those on the lower segment are less attractive from the point of view of both risk and expected return.

In the general case, −1 < ρ < 1, it can be shown that all the portfolios that can be put together form a curved arc linking points (1) and (2), located between the extreme-case graphs for ρ = ±1, as shown in Figure 3.4. If one expresses σP² as a function of EP, as was done in the ρ = 1 case, a second-degree function is obtained. The curve obtained in the (E, σ) plane is therefore a hyperbolic branch.

The term efficient portfolio is applied to a portfolio that, among those that can be put together with the two equities, cannot be improved from the double viewpoint of risk and expected return. Graphically, we are looking at the portfolios located above contact point A16 of the vertical tangent to the portfolio curve. In fact, between A and (2), it is not possible to improve

Figure 3.3 Two-equity portfolio (ρ = −1 case)

Figure 3.4 Two-equity portfolio (general case)

16. This contact point corresponds to the minimum risk portfolio.



EP without increasing the risk, or to decrease σP without reducing the expected return. In addition, any portfolio located on the arc that links A and (1) is dominated by the portfolios located to its left.

3.2.1.2 Portfolio with more than two equities

A portfolio consisting of three equities17 can be considered as a mixture of one of the securities and a portfolio consisting of the two others. For example, a portfolio with the composition X1 = 0.5, X2 = 0.2 and X3 = 0.3 can also be considered to consist of security (1) and a portfolio that itself consists of securities (2) and (3) at rates of 40 % and 60 % respectively. Therefore, for fixed covariances σ12, σ13 and σ23, the full range of portfolios that can be constructed using this process corresponds to a continuous family of curves, as shown in Figure 3.5.

All the portfolios that can be put together using three or more securities therefore form an area within the (E, σ) plane. The concept of 'efficient portfolio' is defined in the same way as for two securities. The full range of efficient portfolios is therefore the part of the boundary of this area limited by security (1) and the contact point of the vertical tangent to the area, which corresponds to the minimum risk portfolio. This arc is known as the efficient frontier. The last part of this Section 3.2 is given over to the various techniques used to determine the efficient frontier under various restrictions and hypotheses.

An investor's choice of a portfolio on the efficient frontier will be made according to his attitude to risk.
If he adopts the most cautious approach, he will choose the portfolio located at the extreme left point of the efﬁcient frontier (the least risky portfolio, very diversiﬁed), while a taste for risk will move him towards the portfolios located on the right part of the efﬁcient frontier (acceptance of increased risk with hope of higher return, generally obtained in portfolios made up of a very few proﬁtable but highly volatile securities).18

Figure 3.5 Three-equity portfolio

17. The passage from two to three securities is general: the results obtained are valid for N securities. The attached CD-ROM shows some more realistic examples of the various models in the Excel sheets contained in the 'Ch 3' directory.
18. This question is examined further in Section 3.2.6.



3.2.2 Diversification and portfolio size

We have just seen that diversification has the effect of reducing the risk of a portfolio through the presence of various securities that are not perfectly correlated. Let us now examine the limits of this diversification: up to what point, for a given correlation structure, can diversification reduce the risk?

3.2.2.1 Mathematical formulation

To simplify the analysis, let us consider a portfolio of N securities in equal proportions:

  Xj = 1/N    (j = 1, . . . , N)

The portfolio risk can therefore be developed as:

  σP² = Σ(i=1..N) Σ(j=1..N) Xi Xj σij
      = (1/N²) [Σ(i=1..N) σi² + Σ(i=1..N) Σ(j≠i) σij]

The first sum contains N terms and the double sum contains N(N − 1) terms; it is therefore natural to define the average variance and the average covariance as:

  var = (1/N) Σ(i=1..N) σi²
  cov = [1/(N(N − 1))] Σ(i=1..N) Σ(j≠i) σij

As soon as N reaches a sufficient magnitude, these two quantities almost cease to depend on N. They then allow the portfolio variance to be written as:

  σP² = (1/N) var + [(N − 1)/N] cov

3.2.2.2 Asymptotic behaviour

When N becomes very large, the first term decreases towards 0 while the second, by now quite stable, converges towards cov. The risk of the portfolio, however well diversified, therefore never falls below this last value:

  lim(N→∞) σP² = lim(N→∞) [(1/N) var + ((N − 1)/N) cov] = cov = σM²

In other words, it corresponds to the market risk.
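A small numerical sketch of this limit (not from the book; the values of the average variance and average covariance are purely illustrative):

```python
# Equal-weight portfolio variance: var/N + (N-1)/N * cov, which
# converges to cov as N grows.  Both parameters below are assumed values.
avg_var = 0.04   # average variance of the individual securities
avg_cov = 0.01   # average covariance between securities

def portfolio_variance(n):
    """Variance of an equally weighted portfolio of n securities."""
    return avg_var / n + (n - 1) / n * avg_cov

variances = [portfolio_variance(n) for n in (1, 5, 10, 100, 10_000)]
```

The sequence of variances is decreasing and levels off just above cov = 0.01; the first term indeed loses 80 % of its value between N = 1 and N = 5.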



Figure 3.6 Diversiﬁcation and portfolio size

The behaviour of the portfolio variance as a function of the number of securities is represented by the graph shown in Figure 3.6. The effects of diversification are initially very rapid (the first term loses 80 % of its value as the number of securities increases from 1 to 5) but stabilise quickly somewhere near the cov value.

3.2.3 Markowitz model and critical line algorithm

3.2.3.1 First formulation

The efficient frontier is the 'North-West' part of the curve consisting of the portfolios defined by the following principle: for each fixed value r of EP, the proportions Xj (j = 1, . . . , N) for which σP² is minimal are determined. The efficient frontier is thus described by giving r all possible values.

Mathematically, the problem is therefore presented as a search for the minimum, with respect to X1, . . . , XN, of the function:

  σP² = Σ(i=1..N) Σ(j=1..N) Xi Xj σij

under the double restriction:

  Σ(j=1..N) Xj Ej = r
  Σ(j=1..N) Xj = 1

The Lagrangian function19 for the problem can thus be written as:

  L(X1, . . . , XN; m1, m2) = Σ(i=1..N) Σ(j=1..N) Xi Xj σij + m1 · [Σ(j=1..N) Xj Ej − r] + m2 · [Σ(j=1..N) Xj − 1]

19. Please refer to Appendix 1 for the theory of extrema.



Taking the partial derivatives with respect to the variables X1, . . . , XN and to the Lagrange multipliers m1 and m2 leads to the following system of N + 2 equations with N + 2 unknowns:

  L'Xj (X1, . . . , XN; m1, m2) = 2 Σ(i=1..N) Xi σij + m1 Ej + m2 = 0    (j = 1, . . . , N)
  L'm1 (X1, . . . , XN; m1, m2) = Σ(i=1..N) Xi Ei − r = 0
  L'm2 (X1, . . . , XN; m1, m2) = Σ(i=1..N) Xi − 1 = 0

This can be written in matrix form:

  | 2σ1²  2σ12  · · ·  2σ1N  E1  1 |   | X1 |   | 0 |
  | 2σ21  2σ2²  · · ·  2σ2N  E2  1 |   | X2 |   | 0 |
  |  ...   ...          ...  ..  . | · | .. | = | . |
  | 2σN1  2σN2  · · ·  2σN²  EN  1 |   | XN |   | 0 |
  |  E1    E2   · · ·   EN   0   0 |   | m1 |   | r |
  |  1     1    · · ·   1    0   0 |   | m2 |   | 1 |
By referring to the matrix of coefficients,20 the vector of unknowns21 and the vector of second members as M, X∗ and G respectively, we give the system the form MX∗ = G. The resolution of this system passes through the inverse matrix of M: X∗ = M⁻¹G.

Note 1 In reality, this vector only supplies a stationary point of the Lagrangian function; it can be shown (although we will not do so here) that it constitutes the solution to the minimisation problem that concerns us.

Note 2 This relation must be applied to the different possible values of r to find the frontier, of which only the efficient ('North-West') part will be retained. The interesting aspect of this result is that while r appears in the vector G, it does not appear in the matrix M, which therefore has to be inverted only once.22

Example

We now determine the efficient frontier that can be constructed with three securities with the following characteristics:

  E1 = 0.05    σ1 = 0.10    ρ12 = 0.3
  E2 = 0.08    σ2 = 0.12    ρ13 = 0.1
  E3 = 0.10    σ3 = 0.15    ρ23 = 0.4

20. In its order-N zone in the upper left corner, this contains the matrix 2V, where V is the variance–covariance matrix.
21. The vector of unknowns does not contain the proportions only; it also involves the Lagrange multipliers (which will not be of use to us later). For this reason we use the notation X∗ instead of X, which is reserved for the vector of proportions. This remark applies to all the various models developed subsequently.
22. The attached CD-ROM contains a series of more realistic examples of the various models in an Excel file known as Ch 3.



The variance–covariance matrix is given by:

  V = | 0.0100  0.0036  0.0015 |
      | 0.0036  0.0144  0.0072 |
      | 0.0015  0.0072  0.0225 |

The matrix M is therefore equal to:

  M = | 0.0200  0.0072  0.0030  0.05  1 |
      | 0.0072  0.0288  0.0144  0.08  1 |
      | 0.0030  0.0144  0.0450  0.10  1 |
      | 0.05    0.08    0.10    0     0 |
      | 1       1       1       0     0 |

This matrix inverts to:

  M⁻¹ = |   6.02  −15.04    9.02  −23.37   2.13 |
        | −15.04   37.60  −22.56    8.42  −0.32 |
        |   9.02  −22.56   13.54   14.95  −0.81 |
        | −23.37    8.42   14.95  −21.72   1.45 |
        |   2.13   −0.32   −0.81    1.45  −0.11 |

By applying this matrix to the vector G = (0, 0, 0, r, 1)ᵗ for different values of r, we find a range of vectors X∗, the first three components of which supply the composition of the portfolios (see Table 3.2). These proportions allow σP to be calculated23 for the various portfolios (Table 3.3). It is therefore possible, from this information, to construct the representative curve for these portfolios (Figure 3.7).

Table 3.2 Composition of portfolios

  r       X1        X2        X3
  0.00    2.1293   −0.3233   −0.8060
  0.01    1.8956   −0.2391   −0.6565
  0.02    1.6620   −0.1549   −0.5071
  0.03    1.4283   −0.0707   −0.3576
  0.04    1.1946    0.0135   −0.2081
  0.05    0.9609    0.0977   −0.0586
  0.06    0.7272    0.1820    0.0908
  0.07    0.4935    0.2662    0.2403
  0.08    0.2598    0.3504    0.3898
  0.09    0.0262    0.4346    0.5392
  0.10   −0.2075    0.5188    0.6887
  0.11   −0.4412    0.6030    0.8382
  0.12   −0.6749    0.6872    0.9877
  0.13   −0.9086    0.7714    1.1371
  0.14   −1.1423    0.8556    1.2866
  0.15   −1.3759    0.9398    1.4361

23. The expected return is of course known.

Table 3.3 Calculation of σP

  EP      σP
  0.00    0.2348
  0.01    0.2043
  0.02    0.1746
  0.03    0.1465
  0.04    0.1207
  0.05    0.0994
  0.06    0.0857
  0.07    0.0835
  0.08    0.0937
  0.09    0.1130
  0.10    0.1376
  0.11    0.1651
  0.12    0.1943
  0.13    0.2245
  0.14    0.2554
  0.15    0.2868


Figure 3.7 Efﬁcient frontier
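The computation in this example can be reproduced in a few lines of Python (a sketch, not from the book; it builds the bordered matrix M from the data above and solves MX∗ = G with a small hand-rolled Gaussian elimination rather than a library call):

```python
def solve(a, b):
    """Solve the linear system a.x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Data of the running example.
E = [0.05, 0.08, 0.10]
s = [0.10, 0.12, 0.15]
rho = {(0, 1): 0.3, (0, 2): 0.1, (1, 2): 0.4}

# Variance-covariance matrix V and the bordered matrix M of the text.
V = [[s[i] * s[j] * (1.0 if i == j else rho[tuple(sorted((i, j)))])
      for j in range(3)] for i in range(3)]
M = [[2 * V[i][j] for j in range(3)] + [E[i], 1.0] for i in range(3)]
M.append(E + [0.0, 0.0])
M.append([1.0, 1.0, 1.0, 0.0, 0.0])

def markowitz_portfolio(r):
    """Proportions (X1, X2, X3) of the minimum-variance portfolio with EP = r."""
    return solve(M, [0.0, 0.0, 0.0, r, 1.0])[:3]
```

markowitz_portfolio(0.00) returns approximately (2.1293, −0.3233, −0.8060), the first row of Table 3.2.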

The efficient part of this frontier is the 'North-West' part, the lower limit of which corresponds to the minimum risk portfolio. For this portfolio, we have EP = 0.0667 and σP = 0.0828.

The method just presented does not require the proportions to be positive. Moreover, a look at the preceding table will show that negative values (and values over 1) are sometimes obtained: the 'classic' portfolios (0 ≤ Xj ≤ 1 for every j) correspond only to expected return values between 0.06 and 0.09. A negative value for a proportion corresponds to a short sale. This type of transaction, which is very hazardous, is not always authorised, especially in the management of investment funds. Symmetrically, a proportion of over 1 indicates the purchase of a security for an amount greater than the total invested. In addition, many portfolios are subject to regulatory or internal restrictions stating that certain types of security cannot represent more than a fixed percentage of the total. In this case, the problem must be resolved by putting together portfolios in which proportions of the



type Bj− ≤ Xj ≤ Bj+ (j = 1, . . . , N) are imposed. We will examine this problem at a later stage.

3.2.3.2 Reformulating the problem

We continue for the moment to examine the problem without inequality restrictions on the proportions. We simply alter the approach slightly; it supplies the same solution but can be generalised more easily to the various models envisaged subsequently.

If, instead of representing the portfolios graphically with σP on the x-axis and EP on the y-axis (as in Figure 3.7), EP is now shown on the x-axis and σP² on the y-axis, the efficient frontier graph appears as shown in Figure 3.8. A straight line in this graph has the equation σ² = a + λE, in which a represents the intercept and λ the slope of the line. We are interested specifically in the lines tangent to the efficient frontier. If the slope of such a tangent is zero (λ = 0), its contact point is the least risky portfolio on the efficient frontier. Conversely, the more λ increases, the further the contact point moves along the efficient frontier towards the riskier portfolios. The parameter λ may vary from 0 to +∞ and is therefore representative of the risk of the portfolio corresponding to the contact point of the tangent with slope λ.

For a fixed value of λ, the tangent to the efficient frontier with slope λ is, of all the straight lines with that slope having at least one point in common with the range of portfolios, the one located farthest to the right, that is, the one with the smallest intercept a = σ² − λE. The problem is therefore reformulated as follows: for the various values of λ between 0 and +∞, minimise, with respect to the proportions X1, . . . , XN, the expression:

  σP² − λEP = Σ(i=1..N) Σ(j=1..N) Xi Xj σij − λ Σ(j=1..N) Xj Ej

under the restriction Σ(j=1..N) Xj = 1. Once the solution, which will depend on λ, has been found, it is sufficient to let this last parameter vary between 0 and +∞ to obtain the efficient frontier.

Figure 3.8 Reformulation of the problem



The Lagrangian function for the problem can be written as:

  L(X1, . . . , XN; m) = Σ(i=1..N) Σ(j=1..N) Xi Xj σij − λ Σ(j=1..N) Xj Ej + m · [Σ(j=1..N) Xj − 1]

A reasoning similar to that used in the first formulation allows the following matrix expression to be deduced from the partial derivatives:

  MX∗ = λE∗ + F

Here, it has been noted that24

  M = | 2σ1²  2σ12  · · ·  2σ1N  1 |      X∗ = (X1, X2, . . . , XN, m)ᵗ
      | 2σ21  2σ2²  · · ·  2σ2N  1 |      E∗ = (E1, E2, . . . , EN, 0)ᵗ
      |  ...   ...          ...  . |      F = (0, 0, . . . , 0, 1)ᵗ
      | 2σN1  2σN2  · · ·  2σN²  1 |
      |  1     1    · · ·   1    0 |

The solution to this system is therefore supplied by:

  X∗ = λ(M⁻¹E∗) + (M⁻¹F)

As in the first formulation, the matrix M is independent of the parameter λ, which is the quantity to be varied; M therefore only needs to be inverted once.

Example

Let us take the same data as those used in the first formulation, namely:

  E1 = 0.05    σ1 = 0.10    ρ12 = 0.3
  E2 = 0.08    σ2 = 0.12    ρ13 = 0.1
  E3 = 0.10    σ3 = 0.15    ρ23 = 0.4

The same variance–covariance matrix V as above will be used, and the matrix M can be expressed as:

  M = | 0.0200  0.0072  0.0030  1 |
      | 0.0072  0.0288  0.0144  1 |
      | 0.0030  0.0144  0.0450  1 |
      | 1       1       1       0 |

This matrix inverts to:

  M⁻¹ = |  31.16  −24.10   −7.06   0.57 |
        | −24.10   40.86  −16.76   0.24 |
        |  −7.06  −16.76   23.82   0.19 |
        |   0.57    0.24    0.19  −0.01 |

24. In the same way as was done for X∗, we use the notation E∗ here because E is reserved for the N-dimensional vector of expected returns.


Table 3.4 Solutions for different values of λ

  λ      X1        X2       X3       EP       σP
  2.0   −1.5810   1.0137   1.5672   0.1588   0.3146
  1.9   −1.4734   0.9750   1.4984   0.1542   0.3000
  1.8   −1.3657   0.9362   1.4296   0.1496   0.2854
  1.7   −1.2581   0.8974   1.3607   0.1450   0.2709
  1.6   −1.1505   0.8586   1.2919   0.1404   0.2565
  1.5   −1.0429   0.8198   1.2231   0.1357   0.2422
  1.4   −0.9353   0.7810   1.1542   0.1311   0.2280
  1.3   −0.8276   0.7423   1.0854   0.1265   0.2139
  1.2   −0.7200   0.7035   1.0165   0.1219   0.2000
  1.1   −0.6124   0.6647   0.9477   0.1173   0.1863
  1.0   −0.5048   0.6259   0.8789   0.1127   0.1729
  0.9   −0.3972   0.5871   0.8100   0.1081   0.1597
  0.8   −0.2895   0.5484   0.7412   0.1035   0.1470
  0.7   −0.1819   0.5096   0.6723   0.0989   0.1347
  0.6   −0.0743   0.4708   0.6035   0.0943   0.1231
  0.5    0.0333   0.4320   0.5347   0.0897   0.1123
  0.4    0.1409   0.3932   0.4658   0.0851   0.1027
  0.3    0.2486   0.3544   0.3970   0.0805   0.0945
  0.2    0.3562   0.3157   0.3282   0.0759   0.0882
  0.1    0.4638   0.2769   0.2593   0.0713   0.0842
  0.0    0.5714   0.2381   0.1905   0.0667   0.0828

As the vectors E∗ and F are given by E∗ = (0.05, 0.08, 0.10, 0)ᵗ and F = (0, 0, 0, 1)ᵗ, the solutions to the problem for the different values of λ are shown in Table 3.4. The efficient frontier graph then takes the form shown in Figure 3.9.

The advantage of this new formulation is twofold. On one hand, it only shows the truly efficient portfolios, instead of the boundary of the whole range of portfolios that can be put together, from which the upper part has to be selected. On the other hand, it readily


Figure 3.9 Efﬁcient frontier for the reformulated problem


lends itself to generalisation to problems with inequality restrictions, as well as to the simple index model and to models including a risk-free security.
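As a numerical check (a Python sketch, not from the book), the reformulated system MX∗ = λE∗ + F can be solved for a sweep of λ values; the case λ = 0 must reproduce the minimum risk portfolio of Table 3.4:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a.x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Matrix M of the running example: 2V bordered by the unit column and row.
M = [[0.0200, 0.0072, 0.0030, 1.0],
     [0.0072, 0.0288, 0.0144, 1.0],
     [0.0030, 0.0144, 0.0450, 1.0],
     [1.0,    1.0,    1.0,    0.0]]
E_star = [0.05, 0.08, 0.10, 0.0]
F = [0.0, 0.0, 0.0, 1.0]

def efficient_portfolio(lam):
    """Proportions (X1, X2, X3) minimising sigma_P^2 - lam * E_P."""
    rhs = [lam * e + f for e, f in zip(E_star, F)]
    return solve(M, rhs)[:3]
```

efficient_portfolio(0.0) gives approximately (0.5714, 0.2381, 0.1905), the least risky portfolio, and sweeping λ from 0 to 2 retraces Table 3.4; M is assembled and factorised only once in spirit, since it does not depend on λ.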

3.2.3.3 Constrained Markowitz model

The problem to be solved is now formulated as follows: for the different values of λ between 0 and +∞, minimise, with respect to the proportions X1, . . . , XN, the expression:

  σP² − λEP = Σ(i=1..N) Σ(j=1..N) Xi Xj σij − λ Σ(j=1..N) Xj Ej

with the restrictions:

  Σ(j=1..N) Xj = 1
  Bj− ≤ Xj ≤ Bj+    (j = 1, . . . , N)

We will first of all introduce the concept of a security's 'status'. The security (j) is termed 'down' (resp. 'up') if its proportion is equal to the lower (resp. upper) bound imposed on it: Xj = Bj− (resp. Xj = Bj+). For an efficient portfolio (that is, one that minimises the Lagrangian function), the partial derivative of the Lagrangian function with respect to such an Xj is not zero at the optimum; it is strictly positive (resp. strictly negative), as can be seen in Figure 3.10. In the system of equations produced by the partial derivatives of the Lagrangian function, the equations relating to the 'down' (resp. 'up') securities should therefore be replaced by Xj = Bj− (resp. Xj = Bj+).

The other securities are termed 'in'; they are such that Bj− < Xj < Bj+ and, at the optimum, the partial derivative of the Lagrangian function with respect to Xj is zero. The equations relating to these securities should not be altered.

The adaptation of the system MX∗ = λE∗ + F produced by the partial derivatives of the Lagrangian function therefore consists of leaving unaltered the components that correspond to the 'in' securities, and, if the security (j) is 'down' or


Figure 3.10 ‘Up’ security and ‘down’ security




'up', of altering the jth line of M and the jth component of E∗ and F, as follows:

  M = | 2σ1²  · · ·  2σ1j  · · ·  2σ1N  1 |      E∗ = (E1, . . . , 0, . . . , EN, 0)ᵗ
      |  ...          ...          ...  . |      F = (0, . . . , Bj±, . . . , 0, 1)ᵗ
      |  0    · · ·   1    · · ·   0    0 |    (jth line of M)
      |  ...          ...          ...  . |
      | 2σN1  · · ·  2σNj  · · ·  2σN²  1 |
      |  1    · · ·   1    · · ·   1    0 |

where the jth component of E∗ is replaced by 0 and the jth component of F by Bj±.

With this alteration, in fact, the jth equation simply becomes Xj = Bj±. In addition, when considering the jth line of the equality M⁻¹M = I, it is evident that M⁻¹ has the same jth line as M, and the jth component of the solution X∗ = λ(M⁻¹E∗) + (M⁻¹F) is again written Xj = Bj±. If (j) has 'in' status, this jth component can of course be written as Xj = λ uj + vj, a quantity strictly included between Bj− and Bj+.

The method proceeds through a series of stages; we will note M0, E0∗ and F0 the matrix elements as defined in the 'unconstrained' case, the index developing from one stage to the next.

The method begins with large values of λ (+∞ ideally). As we are looking to minimise σP² − λEP, EP needs to be as high as possible, and this is consistent with a large value of the risk parameter λ. The first portfolio will therefore consist of the securities that offer the highest expected returns, in proportions equal to their upper bounds Bj+, until (with the remaining securities in proportions equal to Bj−) the sum of the proportions equals 1.25 This portfolio is known as the first corner portfolio. At least one security will therefore be 'up', one will be 'in', and the others will be 'down'. The matrix M and the vectors E∗ and F are altered as shown above. This brings us to M1, E1∗ and F1, and we calculate:

  X∗ = λ(M1⁻¹E1∗) + (M1⁻¹F1)

The parameter λ is then decreased until one of the securities changes its status.26 This first change occurs for a value of λ equal to λc(1), known as the first critical λ. To determine this critical value, and the security that will change its status, each of the securities is examined and a potentially critical λj is defined for it. A 'down' or 'up' security (j) will change its status if the equation corresponding to it becomes L'Xj = 0, that is:

  2 Σ(k=1..N) Xk σjk − λj Ej + m = 0

This is none other than the jth component of the equation M0X∗ = λE0∗ + F0, in which the various Xk and m are given by the values obtained from X∗ = λ(M1⁻¹E1∗) + (M1⁻¹F1).

25. If the inequality restrictions are simply 0 ≤ Xj ≤ 1 ∀j (absence of short sales), the first portfolio will consist only of the security with the highest expected return.
26. For the restrictions 0 ≤ Xj ≤ 1 ∀j, the first corner portfolio consists of a single 'up' security, all the others being 'down'. The first change of status will be a transition to 'in' of the security that was 'up' and of one of the securities that were 'down'. In this case, on one hand the matrix elements M1, E1∗ and F1 are obtained by making the alteration required for the 'down' securities except the one that is known to pass to 'in' status, and on the other hand there is no equation for determining the potential critical λ for this security.



For an 'in' security (j), it is known that Xj = λj uj + vj, and it will change its status if it becomes a 'down' security (uj > 0 as λ decreases) or an 'up' security (uj < 0), in which case we have Bj± = λj uj + vj. This is none other than the jth component of the relation X∗ = λ(M1⁻¹E1∗) + (M1⁻¹F1), in which the left member is replaced by the lower or upper bound as the case may be.

We therefore obtain N equations for N potentially critical values λj. The highest of these is the first critical λ, written λc(1). The proportions of the various securities do not change between λ = +∞ and λ = λc(1); the corresponding portfolio is therefore always the first corner portfolio. The security corresponding to this critical λ then changes its status, allowing M2, E2∗ and F2 to be constructed and the second critical λ, λc(2), to be determined, together with all the portfolios that correspond to the values of λ between λc(1) and λc(2). The portfolio corresponding to λc(2) is of course the second corner portfolio.

The process is then repeated until all the potentially critical λ values are negative, in which case the last critical λ is set equal to 0. The last and least risky corner portfolio, located at the extreme left point of the efficient frontier, corresponds to this value.

The corner portfolios are of course situated on the efficient frontier. Between two consecutive corner portfolios, the status of the securities does not change; only the proportions change. These proportions are calculated, between λc(k−1) and λc(k), using the relation X∗ = λ(Mk⁻¹Ek∗) + (Mk⁻¹Fk). The various sections of curve thus constructed connect continuously and with the same derivative,27 and make up the efficient frontier.

Example

Let us take the same data as were processed before:

  E1 = 0.05    σ1 = 0.10    ρ12 = 0.3
  E2 = 0.08    σ2 = 0.12    ρ13 = 0.1
  E3 = 0.10    σ3 = 0.15    ρ23 = 0.4

Let us impose the requirement of absence of short sales: 0 ≤ Xj ≤ 1 (j = 1, 2, 3). We have the following basic matrix elements:

  M0 = | 0.0200  0.0072  0.0030  1 |      E0∗ = (0.05, 0.08, 0.10, 0)ᵗ
       | 0.0072  0.0288  0.0144  1 |      F0 = (0, 0, 0, 1)ᵗ
       | 0.0030  0.0144  0.0450  1 |
       | 1       1       1       0 |

The first corner portfolio consists only of security (3), the one with the highest expected return. As securities (1) and (2) are 'down', we construct:

  M1 = | 1       0       0       0 |      E1∗ = (0, 0, 0.10, 0)ᵗ
       | 0       1       0       0 |      F1 = (0, 0, 0, 1)ᵗ
       | 0.0030  0.0144  0.0450  1 |
       | 1       1       1       0 |

27. That is, with the same tangent.



We have:

  M1⁻¹ = | 1       0       0  0       |
         | 0       1       0  0       |
         | −1      −1      0  1       |
         | 0.0420  0.0306  1  −0.0450 |

and therefore:

  X∗ = λ(M1⁻¹E1∗) + (M1⁻¹F1) = λ (0, 0, 0, 0.1)ᵗ + (0, 0, 1, −0.045)ᵗ

The first two components of M0X∗ = λE0∗ + F0, with the vector X∗ obtained above, give:

  0.0030 + (0.1 λ1 − 0.045) = 0.05 λ1
  0.0144 + (0.1 λ2 − 0.045) = 0.08 λ2

This gives the two potential critical λ values λ1 = 0.84 and λ2 = 1.53. The first critical λ is therefore λc(1) = 1.53, and security (2) becomes 'in' together with (3), while (1) remains 'down'. We can therefore construct:

  M2 = | 1       0       0       0 |      E2∗ = (0, 0.08, 0.10, 0)ᵗ
       | 0.0072  0.0288  0.0144  1 |      F2 = (0, 0, 0, 1)ᵗ
       | 0.0030  0.0144  0.0450  1 |
       | 1       1       1       0 |

This successively gives:

  M2⁻¹ = | 1        0       0       0       |
         | −0.7733  22.22   −22.22  0.68    |
         | −0.2267  −22.22  22.22   0.32    |
         | 0.0183   0.68    0.32    −0.0242 |

  X∗ = λ(M2⁻¹E2∗) + (M2⁻¹F2) = λ (0, −0.4444, 0.4444, 0.0864)ᵗ + (0, 0.68, 0.32, −0.0242)ᵗ

The first component of M0X∗ = λE0∗ + F0, with the vector X∗ obtained above, gives:

  0.0072 · (−0.4444 λ1 + 0.68) + 0.0030 · (0.4444 λ1 + 0.32) + (0.0864 λ1 − 0.0242) = 0.05 λ1

This produces a potential critical λ of λ1 = 0.5312. The second and third components of the relation X∗ = λ(M2⁻¹E2∗) + (M2⁻¹F2), in which the left member is replaced by the suitable bound, produce:

  −0.4444 λ2 + 0.68 = 1
  0.4444 λ3 + 0.32 = 0



In consequence, λ2 = λ3 = −0.7201. The second critical λ is therefore λc(2) = 0.5312, and the three securities acquire 'in' status. The matrix elements M3, E3∗ and F3 are therefore the same as the basic ones, and the problem can be approached without restriction. We therefore have:

  M3⁻¹ = |  31.16  −24.10   −7.06   0.57 |
         | −24.10   40.86  −16.76   0.24 |
         |  −7.06  −16.76   23.82   0.19 |
         |   0.57    0.24    0.19  −0.01 |

and therefore:

  X∗ = λ(M3⁻¹E3∗) + (M3⁻¹F3) = λ (−1.0762, 0.3878, 0.6884, −0.0137)ᵗ + (0.5714, 0.2381, 0.1905, 0.0667)ᵗ

With the suitable bounds, the first three components of this give: −1.0762 λ1 etc. We therefore arrive at λ1 = −0.3983, λ2 = −0.6140 and λ3 = −0.2767. The last critical λ is therefore λc(3) = 0, and the three securities retain their 'in' status until the end of the process.28

The various portfolios on the efficient frontier, as well as their expected return and risk, are shown in Table 3.5. Of course, between λ = 0.5312 and λ = 0, the proportions obtained here are the same as those obtained in the 'unrestricted' model, as all the securities are 'in'. The efficient frontier graph therefore takes the form shown in Figure 3.11.

Table 3.5 Solution for the constrained Markowitz model

  λ        X1       X2       X3       EP       σP
  1.53     0        0        1        0.1000   0.1500
  1.5      0        0.0133   0.9867   0.0997   0.1486
  1.4      0        0.0578   0.9422   0.0988   0.1442
  1.3      0        0.1022   0.8978   0.0980   0.1400
  1.2      0        0.1467   0.8533   0.0971   0.1360
  1.1      0        0.1911   0.8089   0.0962   0.1322
  1.0      0        0.2356   0.7644   0.0953   0.1286
  0.9      0        0.2800   0.7200   0.0944   0.1253
  0.8      0        0.3244   0.6756   0.0935   0.1222
  0.7      0        0.3689   0.6311   0.0926   0.1195
  0.6      0        0.4133   0.5867   0.0917   0.1170
  0.5312   0        0.4439   0.5561   0.0911   0.1155
  0.5      0.0333   0.4320   0.5347   0.0897   0.1123
  0.4      0.1409   0.3932   0.4658   0.0851   0.1027
  0.3      0.2486   0.3544   0.3970   0.0805   0.0945
  0.2      0.3562   0.3157   0.3282   0.0759   0.0882
  0.1      0.4638   0.2769   0.2593   0.0713   0.0842
  0.0      0.5714   0.2381   0.1905   0.0667   0.0828

28. It is quite logical to have significant diversification in the least risky efficient portfolio.
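The first stretch of this constrained frontier can be checked numerically with a brute-force sketch in Python (not the critical line algorithm itself, and not from the book): with security (1) held 'down' at X1 = 0, we scan X2 (and X3 = 1 − X2) for the value minimising σP² − λEP.

```python
# Data of the running example.
s = [0.10, 0.12, 0.15]
E = [0.05, 0.08, 0.10]
cov23 = 0.4 * s[1] * s[2]   # sigma_23 = 0.0072

def objective(x2, lam):
    """sigma_P^2 - lam * E_P for the portfolio (0, x2, 1 - x2)."""
    x3 = 1.0 - x2
    var = x2**2 * s[1]**2 + x3**2 * s[2]**2 + 2 * x2 * x3 * cov23
    return var - lam * (x2 * E[1] + x3 * E[2])

def best_x2(lam, steps=20_000):
    """Brute-force minimiser of the objective over 0 <= x2 <= 1."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda x2: objective(x2, lam))
```

best_x2(1.0) returns approximately 0.2356, the λ = 1.0 row of Table 3.5, and best_x2(1.53) returns approximately 0: λc(1) = 1.53 is precisely the point where security (2) leaves its lower bound.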



Figure 3.11 Efficient frontier for the constrained Markowitz model


Figure 3.12 Comparison of unconstrained and constrained efﬁcient frontiers

Figure 3.12 superimposes the two efficient frontiers (constrained and unconstrained). The zones corresponding to the short sales, and those in which all the securities are 'in', can be clearly seen.

3.2.3.4 Critical line algorithm

H. Markowitz has proposed an algorithmic method for resolving the problem with the restrictions Xj ≥ 0 (j = 1, . . . , N), known as the critical line algorithm. This algorithm starts with the first corner portfolio, which of course consists of the single security with the highest expected return. It then passes through the successive corner portfolios by testing, at each stage, the changes in the function to be minimised when:

• A new security is introduced into the portfolio.
• A security is taken out of the portfolio.
• A security in the portfolio is replaced by one that was not previously present.

The development of the algorithm is outside the scope of this work and is instead covered in the specialist literature.29 Here, we will simply show the route taken by a three-security problem such as the one illustrated in this section. The restrictions

  Σ(j=1..3) Xj = 1
  0 ≤ Xj ≤ 1    (j = 1, 2, 3)

define, in three-dimensional space, a triangle with vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1), as shown in Figure 3.13. The critical line is represented in bold, and points A, B and C correspond to the corner portfolios obtained for λ = λc(1), λc(2) and λc(3) respectively. In this algorithm, only the corner portfolios are determined; those located between two consecutive corner portfolios are estimated as linear combinations of the corner portfolios.

Figure 3.13 Critical line

29. For example Markowitz, H., Mean Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell, 1987.

3.2.4 Sharpe's simple index model

3.2.4.1 Principles

Determining the efficient frontier within the Markowitz model is not an easy process. In addition, the amount of data required is substantial, as the whole variance–covariance matrix is needed. For this reason, W. Sharpe30 has proposed a simplified version of Markowitz's model based on the following two hypotheses.

1. The returns of the various securities are expressed as first-degree functions of the return of a market-representative index:

     Rjt = aj + bj RIt + εjt    (j = 1, . . . , N)

   It is also assumed that the residuals verify the classical hypotheses of linear regression,31 which require, among others, that the residuals have zero expectation and are not correlated with the explanatory variable RIt.
2. The residuals for the regressions relative to the various securities are not correlated: cov(εit, εjt) = 0 for all i ≠ j.

By applying the convention of omitting the index t, the return on a portfolio will therefore be written, in this case, as:

  RP = Σ(j=1..N) Xj Rj

30. Sharpe W., A simplified model for portfolio analysis, Management Science, Vol. 9, No. 1, 1963, pp. 277–93.
31. See Appendix 3 on this subject.



Developing this expression:

  RP = Σ(j=1..N) Xj (aj + bj RI + εj)
     = Σ(j=1..N) Xj aj + [Σ(j=1..N) Xj bj] RI + Σ(j=1..N) Xj εj
     = Σ(j=1..N) Xj aj + Y RI + Σ(j=1..N) Xj εj

where we have inserted Y = Σ(j=1..N) Xj bj. On the basis of the hypotheses of the model, the expected return and variance of the portfolio can be written:

  EP = Σ(j=1..N) Xj aj + Y EI
  σP² = Σ(j=1..N) Xj² σεj² + Y² σI²

Note 1
The variance of the portfolio can be written in matrix form as a quadratic form:
\[
\sigma_P^2 =
\begin{pmatrix} X_1 & \cdots & X_N & Y \end{pmatrix}
\begin{pmatrix}
\sigma_{\varepsilon_1}^2 & & & \\
 & \ddots & & \\
 & & \sigma_{\varepsilon_N}^2 & \\
 & & & \sigma_I^2
\end{pmatrix}
\begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y \end{pmatrix}
\]
Because of the structure of this matrix, the simple index model is also known as the diagonal model. However, contrary to the impression the term may give, the simplification is not excessive: it is not assumed that the returns on the various securities are uncorrelated, since
\[
\sigma_{ij} = \mathrm{cov}(a_i + b_i R_I + \varepsilon_i,\; a_j + b_j R_I + \varepsilon_j) = b_i b_j \sigma_I^2
\]

Note 2
In practice, the a_j and b_j coefficients of the various regressions are estimated by the least-squares method, giving \hat a_j and \hat b_j. The residuals are estimated using the relation
\[
\hat\varepsilon_{jt} = R_{jt} - (\hat a_j + \hat b_j R_{It})
\]


On the basis of these estimations, the residual variances are determined using their ergodic estimator.

3.2.4.2 Simple index model

We therefore have to solve the following problem: for the different values of λ between 0 and +∞, minimise with respect to the proportions X_1, …, X_N and the variable Y the expression
\[
\sigma_P^2 - \lambda E_P = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y^2 \sigma_I^2 - \lambda \Bigl( \sum_{j=1}^{N} X_j a_j + Y E_I \Bigr)
\]
subject to the restrictions
\[
\sum_{j=1}^{N} X_j b_j = Y \qquad \sum_{j=1}^{N} X_j = 1
\]
The Lagrangian function of the problem is written as
\[
L(X_1, \ldots, X_N, Y; m_1, m_2) = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y^2 \sigma_I^2
- \lambda \Bigl( \sum_{j=1}^{N} X_j a_j + Y E_I \Bigr)
+ m_1 \Bigl( \sum_{j=1}^{N} X_j b_j - Y \Bigr)
+ m_2 \Bigl( \sum_{j=1}^{N} X_j - 1 \Bigr)
\]
Setting the partial derivatives of this Lagrangian function to zero leads to the equality MX^* = λE^* + F, where we have:
\[
M = \begin{pmatrix}
2\sigma_{\varepsilon_1}^2 & & & 0 & b_1 & 1\\
 & \ddots & & \vdots & \vdots & \vdots\\
 & & 2\sigma_{\varepsilon_N}^2 & 0 & b_N & 1\\
0 & \cdots & 0 & 2\sigma_I^2 & -1 & 0\\
b_1 & \cdots & b_N & -1 & 0 & 0\\
1 & \cdots & 1 & 0 & 0 & 0
\end{pmatrix}
\quad
X^* = \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y \\ m_1 \\ m_2 \end{pmatrix}
\quad
E^* = \begin{pmatrix} a_1 \\ \vdots \\ a_N \\ E_I \\ 0 \\ 0 \end{pmatrix}
\quad
F = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}
\]


The solution of this system is written as X^* = λ(M^{-1}E^*) + (M^{-1}F).

Example
Let us take the same data as those used in the first formulation, namely:32
E_1 = 0.05, σ_1 = 0.10, ρ_{12} = 0.3; E_2 = 0.08, σ_2 = 0.12, ρ_{13} = 0.1; E_3 = 0.10, σ_3 = 0.15, ρ_{23} = 0.4.
Let us then suppose that the regression relations are given by
\[
R_1 = 0.014 + 0.60 R_I \qquad R_2 = -0.020 + 1.08 R_I \qquad R_3 = 0.200 + 1.32 R_I
\]
and that the estimated residual variances are σ²_{ε1} = 0.0060, σ²_{ε2} = 0.0040 and σ²_{ε3} = 0.0012. Let us also suppose that the expected return and the variance of the index are E_I = 0.04 and σ_I² = 0.0045 respectively. These data allow us to write
\[
M = \begin{pmatrix}
0.0120 & 0 & 0 & 0 & 0.60 & 1\\
0 & 0.0080 & 0 & 0 & 1.08 & 1\\
0 & 0 & 0.0024 & 0 & 1.32 & 1\\
0 & 0 & 0 & 0.0090 & -1 & 0\\
0.60 & 1.08 & 1.32 & -1 & 0 & 0\\
1 & 1 & 1 & 0 & 0 & 0
\end{pmatrix}
\quad
E^* = \begin{pmatrix} 0.014 \\ -0.020 \\ 0.200 \\ 0.040 \\ 0 \\ 0 \end{pmatrix}
\quad
F = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}
\]
We can therefore calculate
\[
M^{-1}E^* = \begin{pmatrix} -7.46 \\ -18.32 \\ 25.79 \\ 9.77 \\ 0.05 \\ 0.07 \end{pmatrix}
\qquad
M^{-1}F = \begin{pmatrix} 0.513 \\ 0.295 \\ 0.192 \\ 0.880 \\ 0.008 \\ -0.011 \end{pmatrix}
\]
The portfolios for the different values of λ are shown in Table 3.6, and the efficient frontier is represented in Figure 3.14. We should point out that although the efficient frontier has the same appearance as in Markowitz's model, there is no point in comparing the proportions here, as the regression equations relied upon are arbitrary and do not arise from an effective analysis of the relation between the returns on the securities and the return on the index.

32 These values are clearly not necessary for determining the proportions using Sharpe's model (indeed, one reason for the model was to avoid the need to calculate the variance–covariance matrix). We use them here only to calculate the efficient frontier.

Table 3.6 Solution for Sharpe's simple index model

  λ        X1        X2        X3       EP       σP
0.100   −0.2332   −1.5375   2.7706   0.1424   0.3829
0.095   −0.1958   −1.4458   2.6417   0.1387   0.3647
0.090   −0.1585   −1.3542   2.5127   0.1350   0.3465
0.085   −0.1212   −1.2626   2.3838   0.1313   0.3284
0.080   −0.0839   −1.1710   2.2548   0.1276   0.3104
0.075   −0.0465   −1.0793   2.1259   0.1239   0.2924
0.070   −0.0092   −0.9877   1.9969   0.1202   0.2746
0.065    0.0281   −0.8961   1.8680   0.1165   0.2568
0.060    0.0654   −0.8045   1.7390   0.1128   0.2392
0.055    0.1028   −0.7129   1.6101   0.1091   0.2218
0.050    0.1401   −0.6212   1.4812   0.1054   0.2046
0.045    0.1774   −0.5296   1.3522   0.1017   0.1877
0.040    0.2147   −0.4380   1.2233   0.0980   0.1711
0.035    0.2521   −0.3464   1.0943   0.0943   0.1551
0.030    0.2894   −0.2547   0.9654   0.0906   0.1397
0.025    0.3267   −0.1631   0.8364   0.0869   0.1252
0.020    0.3640   −0.0715   0.7075   0.0832   0.1119
0.015    0.4014    0.0201   0.5785   0.0795   0.1003
0.010    0.4387    0.1118   0.4496   0.0758   0.0912
0.005    0.4760    0.2034   0.3206   0.0721   0.0853
0.000    0.5133    0.2950   0.1917   0.0684   0.0832


Figure 3.14 Efﬁcient frontier for Sharpe’s simple index model
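Since all the work is in solving one linear system per value of λ, the example lends itself to a short computational sketch. The following Python fragment is purely illustrative (the small Gaussian-elimination routine and all variable names are ours, not the book's): it assembles M, E* and F for the data above and reproduces the λ = 0.05 row of Table 3.6.

```python
# Illustrative sketch of Sharpe's simple index model resolution
# (solver and names are ours, not the book's).

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col] != 0.0:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Data of the example: slopes b_j, residual variances, intercepts a_j, index.
bs = [0.60, 1.08, 1.32]
ve = [0.0060, 0.0040, 0.0012]
aj = [0.014, -0.020, 0.200]
EI, vI = 0.04, 0.0045
N = 3

# System M X* = lambda E* + F, with X* = (X1..XN, Y, m1, m2).
M = [[0.0] * (N + 3) for _ in range(N + 3)]
for j in range(N):
    M[j][j] = 2 * ve[j]          # 2 sigma_ej^2
    M[j][N + 1] = bs[j]          # coefficient of the multiplier m1
    M[j][N + 2] = 1.0            # coefficient of the multiplier m2
M[N][N] = 2 * vI                 # derivative with respect to Y
M[N][N + 1] = -1.0
M[N + 1][:N] = bs; M[N + 1][N] = -1.0   # restriction: sum Xj bj = Y
M[N + 2][:N] = [1.0] * N                # restriction: sum Xj = 1

Estar = aj + [EI, 0.0, 0.0]
F = [0.0] * (N + 2) + [1.0]

slope = solve(M, Estar)   # M^-1 E*
inter = solve(M, F)       # M^-1 F

lam = 0.05
X = [lam * s + i for s, i in zip(slope, inter)]
print([round(x, 4) for x in X[:N]])
# → [0.1401, -0.6212, 1.4812]  (row λ = 0.05 of Table 3.6)
```

The other rows of the table are obtained by simply varying `lam` between 0 and 0.1.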

Note 1
The saving in data required, compared to Markowitz's model, is considerable. In the latter model, the expected returns, variances and pairwise covariances amount to
\[
N + N + \frac{N(N-1)}{2} = \frac{N(N+3)}{2}
\]
items of information, while in the simple index model we need only the regression coefficients and residual variances, together with the expected return and variance of the index, namely 2N + N + 2 = 3N + 2 items. For example, on a market offering a choice of 100 securities, the number of items of information required is 5150 in the first case and just 302 in the second.

Note 2
If, in addition to the restrictions envisaged above, the inequality restrictions
\[
B_j^- \le X_j \le B_j^+ \qquad j = 1, \ldots, N
\]
are imposed, the simple index model can still be used by applying the same


principles as for Markowitz's model (alteration of the matrix elements according to the 'down', 'in' and 'up' status of the various securities, calculation of the critical values of λ and of the corner portfolios).

3.2.4.3 Multi-index model

One criticism that can be made of the simple index model is that the behaviour of every security is explained by one and the same index. A more consistent way of proceeding is probably to divide the market securities into sectors and to express the return on each security of a given sector as a first-degree function of the return on a sectorial index. The general formulation of this model is heavy and complex; we show it for two sectors, the first corresponding to the securities j = 1, …, N_1 and the second to j = N_1 + 1, …, N_1 + N_2 = N. The sectorial indices are noted I_1 and I_2 respectively. The regression equations take the form
\[
R_{jt} = a_j + b_j R_{I_1 t} + \varepsilon_{jt} \qquad j = 1, \ldots, N_1
\]
\[
R_{jt} = a_j + b_j R_{I_2 t} + \varepsilon_{jt} \qquad j = N_1 + 1, \ldots, N_1 + N_2 = N
\]
The return on the portfolio, together with its expected return and variance, are given by
\[
R_P = \sum_{j=1}^{N} X_j R_j = \sum_{j=1}^{N} X_j a_j + Y_1 R_{I_1} + Y_2 R_{I_2} + \sum_{j=1}^{N} X_j \varepsilon_j
\]
\[
E_P = \sum_{j=1}^{N} X_j a_j + Y_1 E_{I_1} + Y_2 E_{I_2}
\]
\[
\sigma_P^2 = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y_1^2 \sigma_{I_1}^2 + Y_2^2 \sigma_{I_2}^2 + 2 Y_1 Y_2 \sigma_{I_1 I_2}
= \begin{pmatrix} X_1 & \cdots & X_N & Y_1 & Y_2 \end{pmatrix}
\begin{pmatrix}
\sigma_{\varepsilon_1}^2 & & & & \\
 & \ddots & & & \\
 & & \sigma_{\varepsilon_N}^2 & & \\
 & & & \sigma_{I_1}^2 & \sigma_{I_1 I_2} \\
 & & & \sigma_{I_2 I_1} & \sigma_{I_2}^2
\end{pmatrix}
\begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y_1 \\ Y_2 \end{pmatrix}
\]
Here, we have introduced the parameters
\[
Y_1 = \sum_{j=1}^{N_1} X_j b_j \qquad Y_2 = \sum_{j=N_1+1}^{N} X_j b_j
\]
The usual reasoning leads once again to the relation MX^* = λE^* + F, and hence to the solution X^* = λ(M^{-1}E^*) + (M^{-1}F), with
\[
X^* = \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y_1 \\ Y_2 \\ m_1 \\ m_2 \\ m_3 \end{pmatrix}
\qquad
E^* = \begin{pmatrix} a_1 \\ \vdots \\ a_N \\ E_{I_1} \\ E_{I_2} \\ 0 \\ 0 \\ 0 \end{pmatrix}
\qquad
F = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}
\]
where m_1, m_2 and m_3 are the Lagrange multipliers, and where M is the matrix of the linear system obtained by setting the partial derivatives of the Lagrangian to zero:
\[
\begin{aligned}
2\sigma_{\varepsilon_j}^2 X_j + b_j m_1 + m_3 &= \lambda a_j \qquad j = 1, \ldots, N_1\\
2\sigma_{\varepsilon_j}^2 X_j + b_j m_2 + m_3 &= \lambda a_j \qquad j = N_1 + 1, \ldots, N\\
2\sigma_{I_1}^2 Y_1 + 2\sigma_{I_1 I_2} Y_2 - m_1 &= \lambda E_{I_1}\\
2\sigma_{I_2 I_1} Y_1 + 2\sigma_{I_2}^2 Y_2 - m_2 &= \lambda E_{I_2}\\
\sum_{j=1}^{N_1} b_j X_j - Y_1 &= 0\\
\sum_{j=N_1+1}^{N} b_j X_j - Y_2 &= 0\\
\sum_{j=1}^{N} X_j &= 1
\end{aligned}
\]

It should be noted that, compared to the simple index model, the two-index model requires only three additional items of information: the expected return, the variance and the covariance of the second index.

3.2.5 Model with risk-free security

3.2.5.1 Modelling and resolution

Let us now examine the case in which the portfolio consists of a certain number N of equities (with returns R_1, …, R_N) in proportions X_1, …, X_N, and of a risk-free security with return R_F in proportion X_{N+1}, with X_1 + … + X_N + X_{N+1} = 1. The hypothesis underlying this risk-free security is formulated as follows: the investor has the possibility of investing or lending (X_{N+1} > 0), or of borrowing (X_{N+1} < 0), funds at the same rate R_F. Alongside the returns on the equities, which are the random variables examined in the previous paragraphs (with their expected returns E_j and their variance–covariance matrix V), the return on the risk-free security is a degenerate random variable:
\[
E_{N+1} = R_F \qquad \sigma_{N+1}^2 = 0 \qquad \sigma_{j,N+1} = 0 \quad (j = 1, \ldots, N)
\]


Note
We now study the effect of the presence of a risk-free security in the portfolio on the basis of Markowitz's model without inequality restrictions. The presentation is easily adapted to cover Sharpe's model, or to take account of inequality restrictions; the result on the shape of the efficient frontier (see below) is valid in all cases, so only one presentation is necessary.

The return on the portfolio is written as R_P = X_1 R_1 + … + X_N R_N + X_{N+1} R_F. This allows the expected return and variance to be calculated:
\[
E_P = \sum_{j=1}^{N} X_j E_j + X_{N+1} R_F
\qquad
\sigma_P^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij}
\]
We must therefore solve, for the different values of λ between 0 and +∞, the problem of minimising with respect to the proportions X_1, …, X_N and X_{N+1} the expression σ_P² − λE_P, under the restriction
\[
\sum_{j=1}^{N} X_j + X_{N+1} = 1
\]
The Lagrangian function of this problem can be written as
\[
L(X_1, \ldots, X_N, X_{N+1}; m) = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij}
- \lambda \Bigl( \sum_{j=1}^{N} X_j E_j + X_{N+1} R_F \Bigr)
+ m \Bigl( \sum_{j=1}^{N} X_j + X_{N+1} - 1 \Bigr)
\]
Setting its partial derivatives to zero leads to the system of equations MX^* = λE^* + F, where we have:
\[
M = \begin{pmatrix}
2\sigma_1^2 & 2\sigma_{12} & \cdots & 2\sigma_{1N} & 0 & 1\\
2\sigma_{21} & 2\sigma_2^2 & \cdots & 2\sigma_{2N} & 0 & 1\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
2\sigma_{N1} & 2\sigma_{N2} & \cdots & 2\sigma_N^2 & 0 & 1\\
0 & 0 & \cdots & 0 & 0 & 1\\
1 & 1 & \cdots & 1 & 1 & 0
\end{pmatrix}
\quad
X^* = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \\ X_{N+1} \\ m \end{pmatrix}
\quad
E^* = \begin{pmatrix} E_1 \\ E_2 \\ \vdots \\ E_N \\ R_F \\ 0 \end{pmatrix}
\quad
F = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{pmatrix}
\]


The solution of this system is of course written as X^* = λ(M^{-1}E^*) + (M^{-1}F).

Example
Let us take the same data as those used in the first formulation, namely:
E_1 = 0.05, σ_1 = 0.10, ρ_{12} = 0.3; E_2 = 0.08, σ_2 = 0.12, ρ_{13} = 0.1; E_3 = 0.10, σ_3 = 0.15, ρ_{23} = 0.4.
Let us suppose that the risk-free interest rate is R_F = 0.03. We therefore have
\[
M = \begin{pmatrix}
0.0200 & 0.0072 & 0.0030 & 0 & 1\\
0.0072 & 0.0288 & 0.0144 & 0 & 1\\
0.0030 & 0.0144 & 0.0450 & 0 & 1\\
0 & 0 & 0 & 0 & 1\\
1 & 1 & 1 & 1 & 0
\end{pmatrix}
\quad
E^* = \begin{pmatrix} 0.05 \\ 0.08 \\ 0.10 \\ 0.03 \\ 0 \end{pmatrix}
\quad
F = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}
\]
and therefore
\[
M^{-1}E^* = \begin{pmatrix} 0.452 \\ 1.024 \\ 1.198 \\ -2.674 \\ 0.030 \end{pmatrix}
\qquad
M^{-1}F = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}
\]

This leads to the portfolios shown in Table 3.7.

Table 3.7 Solution for model with risk-free security

 λ      X1       X2       X3      X(RF)      EP       σP
2.0   0.9031   2.0488   2.3953   −4.3472   0.3182   0.5368
1.9   0.8580   1.9464   2.2755   −4.0799   0.3038   0.5100
1.8   0.8128   1.8439   2.1558   −3.8125   0.2894   0.4831
1.7   0.7677   1.7415   2.0360   −3.5451   0.2749   0.4563
1.6   0.7225   1.6390   1.9162   −3.2778   0.2605   0.4295
1.5   0.6774   1.5366   1.7965   −3.0104   0.2461   0.4026
1.4   0.6322   1.4342   1.6767   −2.7431   0.2317   0.3758
1.3   0.5870   1.3317   1.5569   −2.4757   0.2173   0.3489
1.2   0.5419   1.2293   1.4372   −2.2083   0.2029   0.3221
1.1   0.4967   1.1268   1.3174   −1.9410   0.1885   0.2952
1.0   0.4516   1.0244   1.1976   −1.6736   0.1741   0.2684
0.9   0.4064   0.9220   1.0779   −1.4063   0.1597   0.2416
0.8   0.3613   0.8195   0.9581   −1.1389   0.1453   0.2147
0.7   0.3161   0.7171   0.8384   −0.8715   0.1309   0.1879
0.6   0.2709   0.6146   0.7186   −0.6042   0.1165   0.1610
0.5   0.2258   0.5122   0.5988   −0.3368   0.1020   0.1342
0.4   0.1806   0.4098   0.4791   −0.0694   0.0876   0.1074
0.3   0.1355   0.3073   0.3593    0.1979   0.0732   0.0805
0.2   0.0903   0.2049   0.2395    0.4653   0.0588   0.0537
0.1   0.0452   0.1024   0.1198    0.7326   0.0444   0.0268
0.0   0.0000   0.0000   0.0000    1.0000   0.0300   0.0000
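As a cross-check on Table 3.7, the same linear system can be assembled and solved numerically. The Python sketch below is our own illustration (the solver and the names are not from the book); it reproduces the λ = 1.0 row of the table.

```python
# Illustrative sketch of the model with risk-free security
# (solver and names are ours, not the book's).

def solve(a, b):
    """Solve a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[r][c] != 0.0:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Variance-covariance matrix of the three equities, expected returns, RF.
V = [[0.0100, 0.0036, 0.0015],
     [0.0036, 0.0144, 0.0072],
     [0.0015, 0.0072, 0.0225]]
E, RF, N = [0.05, 0.08, 0.10], 0.03, 3

# System M X* = lambda E* + F, with X* = (X1, X2, X3, X(RF), m).
M = [[0.0] * (N + 2) for _ in range(N + 2)]
for i in range(N):
    for j in range(N):
        M[i][j] = 2 * V[i][j]
    M[i][N + 1] = 1.0              # coefficient of the multiplier m
M[N][N + 1] = 1.0                  # derivative w.r.t. X(RF): m = lambda RF
M[N + 1][:N + 1] = [1.0] * (N + 1)  # budget restriction

Estar = E + [RF, 0.0]
F = [0.0] * (N + 1) + [1.0]
slope, inter = solve(M, Estar), solve(M, F)

lam = 1.0
X = [lam * s + i for s, i in zip(slope, inter)]
print([round(x, 4) for x in X[:N + 1]])
# → [0.4516, 1.0244, 1.1976, -1.6736]  (row λ = 1.0 of Table 3.7)
```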

Figure 3.15 Efficient frontier for model with risk-free security

Figure 3.16 Comparison of efficient frontiers with and without risk-free security

The efficient frontier is shown in Figure 3.15. If the efficient frontier obtained above and the frontier obtained using Markowitz's model (without risk-free security) are superimposed, Figure 3.16 is obtained.

3.2.5.2 Efficient frontier

The graphic phenomenon that appears in the previous example is general. In fact, a portfolio consisting of N securities and the risk-free security can be considered to consist of the risk-free security in proportion X = X_{N+1} and of a portfolio of equities in proportion 1 − X, with return R (of parameters E and σ). The return on the risk-free security has zero variance and is not correlated with the equity portfolio. The parameters of the portfolio are therefore given by
\[
E_P = X R_F + (1 - X) E \qquad \sigma_P^2 = (1 - X)^2 \sigma^2
\]
which gives, after X has been eliminated:
\[
E_P = R_F \pm \sigma_P \, \frac{E - R_F}{\sigma}
\]

Figure 3.17 Portfolios with risk-free security

Figure 3.18 Efficient frontier with risk-free security present

according as X ≤ 1 or X ≥ 1. The equations of these straight lines show that the portfolios in question are located on two half-lines with slopes of the same magnitude and opposite sign (see Figure 3.17). The lower half-line (X ≥ 1) corresponds to a situation in which the portfolio of equities is sold short in order to invest more in the risk-free security. From now on, we will be interested in the upper half-line. If the efficient frontier consisting only of equities is known, the optimal half-line, which maximises E_P for a given σ_P, is the one located highest, that is, the tangent to the efficient frontier of the equities (see Figure 3.18). The portfolios located between the vertical axis and the contact point A are characterised by 0 ≤ X ≤ 1, and those beyond A are such that X ≤ 0 (borrowing at rate R_F in order to invest further in the contact portfolio A).

3.2.6 The Elton, Gruber and Padberg method of portfolio management

The Elton, Gruber and Padberg or EGP method33 was developed34 to supply a quick and coherent solution to the problem of optimising portfolios. Instead of determining

33 Or, more precisely, methods; in fact, various models have been developed around a general idea, according to the hypotheses laid down.
34 Elton E., Gruber M. and Padberg M., Simple criteria for optimal portfolio selection, Journal of Finance, Vol. XI, No. 5, 1976, pp. 1341–57.


the efficient frontier as in Markowitz's or Sharpe's models, this new technique simply determines the portfolio corresponding to the contact point of the tangent to the efficient frontier drawn from the point (0, R_F).

3.2.6.1 Hypotheses

The method now being examined assumes that:

• The mean–variance approach is relevant, which allows a certain number of results from Markowitz's theory to be used.
• There is a risk-free asset with return noted R_F.

Alongside these general hypotheses, Elton, Gruber and Padberg have developed resolution algorithms in two specific cases:

• Constant correlations. In this first model, it is assumed that the correlation coefficients of the returns on the various securities are all equal: ρ_{ij} = ρ for all i, j.
• Sharpe's simple index model can be used.

The first of these two simplifications is quite harsh and as such not very realistic, so we will concentrate on the second case. Remember that it is based on the following two conditions.

1. The returns on the various securities are expressed as first-degree functions of the return on a market-representative index: R_{jt} = a_j + b_j R_{It} + ε_{jt}, j = 1, …, N. It is also assumed that the residuals verify the classical hypotheses of linear regression, including the hypothesis that the residuals have zero expectation and are not correlated with the explanatory variable R_{It}.
2. The residuals of the regressions relative to the various securities are not correlated: cov(ε_{it}, ε_{jt}) = 0 for all i ≠ j.

3.2.6.2 Resolution of the case in which short sales are authorised

First of all, we will carry out a detailed analysis of the case in which the proportions are not subject to inequality restrictions. Here, the reasoning is more straightforward35 than in the case where short sales are prohibited. Nevertheless, as will be seen (without demonstration), applying the algorithm is scarcely any more complex in the second case.
If one considers a portfolio P consisting solely of equities in proportions X_1, X_2, …, X_N, the full range of portfolios consisting partly of P and partly of the risk-free security

Elton E., Gruber M. and Padberg M., Optimal portfolios from simple ranking devices, Journal of Portfolio Management, Vol. 4, No. 3, 1978, pp. 15–19. Elton E., Gruber M. and Padberg M., Simple criteria for optimal portfolio selection: tracing out the efficient frontier, Journal of Finance, Vol. XIII, No. 1, 1978, pp. 296–302. Elton E., Gruber M. and Padberg M., Simple criteria for optimal portfolio selection with upper bounds, Operations Research, 1978. Readers are also advised to read Elton E. and Gruber M., Modern Portfolio Theory and Investment Analysis, John Wiley & Sons, Inc., 1991.
35 In addition, it starts in the same way as the demonstration of the CAPM equation (see §3.3.1).


Figure 3.19 EGP method

R_F shall make up the straight line linking the points (0, R_F) and (σ_P, E_P), as illustrated in Figure 3.19. The slope of the straight line in question is given by
\[
\Theta_P = \frac{E_P - R_F}{\sigma_P}
\]
which may be interpreted as a risk premium, as will be seen in Section 3.3.1. According to the reasoning set out in the previous paragraph, the ideal portfolio P corresponds to the contact point A of the tangent to the efficient frontier coming from the point (0, R_F), for which the slope is maximal. We are therefore looking for the proportions that maximise the slope Θ_P or, which amounts to the same thing, maximise Θ_P². Since
\[
E_P - R_F = \sum_{j=1}^{N} X_j E_j - \sum_{j=1}^{N} X_j R_F = \sum_{j=1}^{N} X_j (E_j - R_F)
\qquad
\sigma_P^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij}
\]
the derivative of
\[
\Theta_P^2 = \frac{(E_P - R_F)^2}{\sigma_P^2}
= \frac{\Bigl( \sum_{j=1}^{N} X_j (E_j - R_F) \Bigr)^2}{\sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij}}
\]
with respect to X_k is given by
\[
(\Theta_P^2)'_{X_k}
= \frac{2 \Bigl( \sum_{j=1}^{N} X_j (E_j - R_F) \Bigr)(E_k - R_F) \cdot \sigma_P^2
      - \Bigl( \sum_{j=1}^{N} X_j (E_j - R_F) \Bigr)^2 \cdot 2 \sum_{j=1}^{N} X_j \sigma_{kj}}{\sigma_P^4}
= \frac{2(E_P - R_F)(E_k - R_F)\sigma_P^2 - 2(E_P - R_F)^2 \sum_{j=1}^{N} X_j \sigma_{kj}}{\sigma_P^4}
\]


\[
(\Theta_P^2)'_{X_k} = \frac{2(E_P - R_F)}{\sigma_P^2} \Bigl( (E_k - R_F) - \gamma \sum_{j=1}^{N} X_j \sigma_{kj} \Bigr)
\]
in which we have provisionally set γ = (E_P − R_F)/σ_P². This derivative is zero if
\[
E_k - R_F = \gamma \sum_{j=1}^{N} X_j \sigma_{kj}
\]
By introducing Z_j = γ X_j (j = 1, …, N), the system to be solved with respect to Z_1, …, Z_N is therefore
\[
E_k - R_F = \sum_{j=1}^{N} Z_j \sigma_{kj} \qquad k = 1, \ldots, N
\]
Before proceeding with the resolution, note that finding the Z_k quantities allows the X_k quantities to be recovered, since
\[
X_k = \frac{Z_k}{\gamma} = \frac{Z_k}{\gamma \sum_{j=1}^{N} X_j} = \frac{Z_k}{\sum_{j=1}^{N} Z_j}
\]
The hypotheses of Sharpe's model allow the following to be written:
\[
\sigma_{kj} = \mathrm{cov}(a_k + b_k R_I + \varepsilon_k,\; a_j + b_j R_I + \varepsilon_j)
= b_k b_j \sigma_I^2 +
\begin{cases}
0 & \text{if } j \neq k \\
\sigma_{\varepsilon_k}^2 & \text{if } j = k
\end{cases}
\]
The k-th equation of the system can then be written
\[
E_k - R_F = b_k \sum_{j=1}^{N} Z_j b_j \sigma_I^2 + Z_k \sigma_{\varepsilon_k}^2
\]
or, solving with respect to Z_k:
\[
Z_k = \frac{1}{\sigma_{\varepsilon_k}^2} \Bigl( (E_k - R_F) - b_k \sum_{j=1}^{N} Z_j b_j \sigma_I^2 \Bigr)
= \frac{b_k}{\sigma_{\varepsilon_k}^2} \Bigl( \theta_k - \sum_{j=1}^{N} Z_j b_j \sigma_I^2 \Bigr)
\]
where we have set
\[
\theta_k = \frac{E_k - R_F}{b_k}
\]


All that now remains is to determine the sum between the brackets. On the basis of the last result, we find
\[
\sum_{k=1}^{N} Z_k b_k = \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2} \Bigl( \theta_k - \sum_{j=1}^{N} Z_j b_j \sigma_I^2 \Bigr)
= \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2} \theta_k
- \Bigl( \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2} \Bigr) \Bigl( \sum_{j=1}^{N} Z_j b_j \Bigr) \sigma_I^2
\]
the resolution of which gives
\[
\sum_{j=1}^{N} Z_j b_j = \frac{\displaystyle \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2} \theta_k}
{\displaystyle 1 + \sigma_I^2 \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}}
\]
By introducing the new notation
\[
\phi = \Bigl( \sum_{j=1}^{N} Z_j b_j \Bigr) \sigma_I^2
= \frac{\displaystyle \sigma_I^2 \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2} \theta_k}
{\displaystyle 1 + \sigma_I^2 \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}}
\]
and substituting the sum just calculated into the expression for Z_k, we find
\[
Z_k = \frac{b_k}{\sigma_{\varepsilon_k}^2} (\theta_k - \phi) \qquad k = 1, \ldots, N
\]

Example
Let us take the same data as those used for the simple index model (only the essential data are mentioned here):
E_1 = 0.05, E_2 = 0.08, E_3 = 0.10
with the regression relations and the estimated residual variances
\[
R_1 = 0.014 + 0.60 R_I \quad (\sigma_{\varepsilon_1}^2 = 0.0060)
\qquad
R_2 = -0.020 + 1.08 R_I \quad (\sigma_{\varepsilon_2}^2 = 0.0040)
\qquad
R_3 = 0.200 + 1.32 R_I \quad (\sigma_{\varepsilon_3}^2 = 0.0012)
\]
Assume that the variance of the index is σ_I² = 0.0045. Finally, assume also that, as for the model with the risk-free security, the risk-free rate is R_F = 0.03. These data allow the calculation of
θ_1 = 0.0333, θ_2 = 0.0463, θ_3 = 0.0530


Therefore, φ = 0.0457. The Z_k values are deduced:
Z_1 = −1.2327, Z_2 = 0.1717, Z_3 = 8.1068
and the proportions of the optimum portfolio follow:
X_1 = −0.1750, X_2 = 0.0244, X_3 = 1.1506
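Because the Z_k have a closed form, the whole calculation fits in a few lines. The Python sketch below is our own illustration of the formulas above (the variable names are ours, not the book's); it reproduces φ and the proportions of the example.

```python
# Illustrative EGP calculation when short sales are authorised
# (names are ours, not the book's).

E  = [0.05, 0.08, 0.10]          # expected returns
b  = [0.60, 1.08, 1.32]          # regression slopes on the index
ve = [0.0060, 0.0040, 0.0012]    # residual variances
vI, RF = 0.0045, 0.03            # index variance, risk-free rate

theta = [(e - RF) / bj for e, bj in zip(E, b)]

# phi = sigma_I^2 * sum(b^2 theta / ve) / (1 + sigma_I^2 * sum(b^2 / ve))
num = sum(bj * bj / v * t for bj, v, t in zip(b, ve, theta))
den = 1 + vI * sum(bj * bj / v for bj, v in zip(b, ve))
phi = vI * num / den

Z = [bj / v * (t - phi) for bj, v, t in zip(b, ve, theta)]
X = [z / sum(Z) for z in Z]      # optimal proportions
print(round(phi, 4), [round(x, 4) for x in X])
```

The printed values match the book's φ = 0.0457 and X = (−0.1750, 0.0244, 1.1506).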

3.2.6.3 Resolution of the case in which short sales are prohibited

Let us now examine the case in which restrictions are introduced. These are less general than those envisaged in Markowitz's model, and are written simply as 0 ≤ X_j ≤ 1 (j = 1, …, N). The method, which we show here without supporting calculations, is very similar to that used in the case where short sales are authorised. As above, the following are calculated:
\[
\theta_k = \frac{E_k - R_F}{b_k} \qquad k = 1, \ldots, N
\]
The securities are then sorted in decreasing order of θ_k, and this order is preserved until the end of the algorithm. Instead of having just one parameter φ, one parameter is calculated for each security:
\[
\phi_k = \frac{\displaystyle \sigma_I^2 \sum_{j=1}^{k} \frac{b_j^2}{\sigma_{\varepsilon_j}^2} \theta_j}
{\displaystyle 1 + \sigma_I^2 \sum_{j=1}^{k} \frac{b_j^2}{\sigma_{\varepsilon_j}^2}} \qquad k = 1, \ldots, N
\]
It can be shown that the sequence of φ_k numbers first increases, passes through a maximum, and finally ends with a decreasing phase. The value K of the index k corresponding to the maximum φ_k is noted. The number φ_K is named the 'cut-off rate', and it can be shown that the calculation of the Z_k values by the same relation as before (replacing φ by φ_K) produces positive values for k = 1, …, K and negative values for k = K + 1, …, N. Only the first K securities are included in the portfolio. The calculations to be made are therefore
\[
Z_k = \frac{b_k}{\sigma_{\varepsilon_k}^2} (\theta_k - \phi_K) \qquad k = 1, \ldots, K
\]
This gives, for the proportions of the K securities retained:
\[
X_k = \frac{Z_k}{\displaystyle \sum_{j=1}^{K} Z_j} \qquad k = 1, \ldots, K
\]

Example
Let us take the same data as above. Of course, we still have
θ_1 = 0.0333, θ_2 = 0.0463, θ_3 = 0.0530
This allows the securities to be classified in the order (3), (2), (1). Provisionally renumbering the securities in this new order produces
φ_1 = 0.04599, φ_2 = 0.04604, φ_3 = 0.04566
This shows that K = 2 and that the cut-off rate is φ_2 = 0.04604. The Z_k values are therefore deduced:
Z_1 = 7.6929, Z_2 = 0.0701
and the proportions of the optimum portfolio follow:
X_1 = 0.9910, X_2 = 0.0090
Reverting to the initial order, the securities to be included in the portfolio are therefore securities (2) and (3), with the relative proportions
X_2 = 0.0090, X_3 = 0.9910
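The ranking-and-cut-off algorithm can be sketched as follows in Python, on the same data (an illustration of our own; the names are not the book's).

```python
# Illustrative EGP algorithm when short sales are prohibited
# (names are ours, not the book's).

E  = [0.05, 0.08, 0.10]
b  = [0.60, 1.08, 1.32]
ve = [0.0060, 0.0040, 0.0012]
vI, RF = 0.0045, 0.03

# Rank securities by decreasing theta_k = (E_k - RF) / b_k.
order = sorted(range(len(E)), key=lambda j: -(E[j] - RF) / b[j])
theta = [(E[j] - RF) / b[j] for j in order]
bb    = [b[j] ** 2 / ve[j] for j in order]   # b_j^2 / residual variance

# Cumulative phi_k sequence; its maximum gives the cut-off rate phi_K.
phis, num, den = [], 0.0, 1.0
for t, w in zip(theta, bb):
    num += w * t
    den += vI * w
    phis.append(vI * num / den)
K = max(range(len(phis)), key=lambda k: phis[k]) + 1
cut = phis[K - 1]

# Only the first K (ranked) securities enter the portfolio.
Z = [b[j] / ve[j] * (th - cut) for j, th in zip(order[:K], theta[:K])]
X = [z / sum(Z) for z in Z]
print(K, [round(p, 5) for p in phis], [round(x, 4) for x in X])
```

Here `order` is (3), (2), (1) in the book's numbering, K comes out as 2, and the retained proportions are 0.9910 and 0.0090.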

3.2.7 Utility theory and optimal portfolio selection

Once the efficient frontier has been determined, the question that faces the investor is that of choosing, from all the efficient portfolios, the one that best suits him. The portfolio chosen will differ from one investor to another, and the choice made will depend on his attitude and behaviour towards risk. The efficient frontier, in fact, contains prudent portfolios (low expected return and risk, located at the left end of the curve) as well as more risky portfolios (higher expected return and risk, located towards the right end).

3.2.7.1 Utility function

The concept of a utility function can be introduced quite generally36 to represent, from an individual's viewpoint, the utility and interest that he finds in a project, investment, strategy etc., the elements in question presenting a certain level of risk. The numerical values of this function are of little importance, as it is essentially used to compare projects, investments, strategies and so on. Here, we present the theory of utility in the context of its application to a return (which, remember, is random), for example that of a portfolio of equities.

Because of the presence of risk, it is evident that we cannot be content with taking E(R) as the utility U(R) of the return. This was clearly shown by D. Bernoulli in 1732 through the 'St Petersburg paradox'. The question is: how much would you be prepared to stake to participate in the following game? I toss a coin repeatedly, and I give you $2 if tails comes up on the first throw, $4 if tails comes up for the first time on the second throw, $8 if tails appears for the first time on the third throw, and so on; I will therefore give you $2^n if tails comes up for the first time on the n-th throw. Most people would lay down a small sum (at least $2), but would be reluctant to invest more because of the increased risk in the game.
A player who put down $20 would have a

36 An excellent presentation of the general concepts of behaviour in the face of risk (not necessarily financial) and of the concept of 'utility' is found in Eeckhoudt L. and Gollier C., Risk, Harvester Wheatsheaf, 1995.


probability of losing of 1/2 + 1/4 + 1/8 + 1/16 = 15/16 = 0.9375 (with a $20 stake, he only comes out ahead if tails first appears on the fifth throw or later), and would therefore win on only 6.25 stakes out of every 100. The average gain in the game, however, is
\[
\sum_{n=1}^{\infty} 2^n \Bigl( \frac{1}{2} \Bigr)^n = 1 + 1 + 1 + \ldots = \infty
\]
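The divergence is easy to check numerically: each round contributes 2^n (1/2)^n = 1 to the expectation, so a game capped at n throws has an expected gain of exactly n dollars.

```python
# Truncated St Petersburg expectation: each round contributes
# 2^n * (1/2)^n = 1, so capping the game at n throws yields exactly n.

def expected_gain(rounds):
    return sum(2 ** n * 0.5 ** n for n in range(1, rounds + 1))

for n in (10, 100, 1000):
    print(n, expected_gain(n))   # grows without bound as n increases
```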

It is aversion to risk that justifies the decision of the player, and the aim of the utility function is to represent this attitude.

In utility theory, one compares projects, investments, strategies etc. (in our case, returns) through a relation of preference (R_1 is preferable to R_2: R_1 ≻ R_2) and a relation of indifference (indifference between R_1 and R_2: R_1 ∼ R_2). The behaviour of the investor can be expressed if these two relations obey the following axioms:

• (Comparability): the investor can always compare two returns: ∀R_1, R_2, we have R_1 ≻ R_2, R_2 ≻ R_1, or R_1 ∼ R_2.
• (Reflexivity): ∀R, R ∼ R.
• (Transitivity): ∀R_1, R_2, R_3, if R_1 ≻ R_2 and R_2 ≻ R_3, then R_1 ≻ R_3.
• (Continuity): ∀R_1, R_2, R_3, if R_1 ≻ R_2 ≻ R_3, there is a single X ∈ [0; 1] such that [X R_1 + (1 − X) R_3] ∼ R_2.
• (Independence): ∀R_1, R_2, R_3 and ∀X ∈ [0; 1], if R_1 ≻ R_2, then [X R_1 + (1 − X) R_3] ≻ [X R_2 + (1 − X) R_3].

Von Neumann and Morgenstern37 have demonstrated the expected utility theorem, which states that if the preferences of an investor obey the axioms set out above, there is a function U such that
∀R_1, R_2: R_1 ≻ R_2 ⇔ E[U(R_1)] > E[U(R_2)].
This utility function is clearly an increasing function. We have noted that its numerical values are not essential, as it is only used to compare returns. The expected utility theorem allows this statement to be made more precise: if an investor's preferences are modelled by the utility function U, then any function U* = aU + b with a > 0 models the same system of preferences. In fact, if R_1 ≻ R_2 is expressed as E[U(R_1)] > E[U(R_2)], we have:
E[U*(R_1)] = E[aU(R_1) + b] = aE[U(R_1)] + b > aE[U(R_2)] + b = E[aU(R_2) + b] = E[U*(R_2)]

The utility function is an element that is intrinsically associated with each investor (and is also likely to evolve with time and circumstances). It is neither easy nor indeed very useful to know this function exactly.
If one wishes to estimate it approximately, one has to define a list of possible values R_1 < R_2 < … < R_n for the return and then, for i = 2, …, n − 1, ask the investor for which probability p_i he is indifferent between obtaining

Von Neumann J. and Morgenstern O., Theory of Games and Economic Behaviour, Princeton University Press, 1947.


a definite return R_i, or playing in a lottery that yields the returns R_1 and R_n with respective probabilities (1 − p_i) and p_i. If one chooses arbitrarily U(R_1) = 0 and U(R_n) = 100, then U(R_i) = 100 p_i (i = 2, …, n − 1).

3.2.7.2 Attitude towards risk

For most investors, an increase in return of 0.5 % is of greater interest if the current return is 2 % than if it is 5 %. This type of attitude is called risk aversion; the opposite attitude is known as taste for risk, and the middle line is termed risk neutrality. How do these behaviour patterns show up in the utility function? Let us examine the case of aversion. Generally, if one wishes to state that the utility U(R) must increase with R while giving less weight to the same variation in return when the level of return is high, we will have
\[
R_1 < R_2 \;\Rightarrow\; U(R_1 + \Delta R) - U(R_1) > U(R_2 + \Delta R) - U(R_2)
\]
This expresses the decreasing nature of the marginal utility. In this case, the derivative of the utility function is a decreasing function, the second derivative is therefore negative, and the utility function is concave. The results obtained from these considerations are summarised in Table 3.8, and a representation of the utility function in the various cases is shown in Figure 3.20.

Let us now define this concept more precisely. Consider an investor who has the choice between, on one hand, a certain return totalling R and, on the other, a lottery giving him a random return taking the two values (R − r) and (R + r), each with probability 1/2. If he shows an aversion to risk, the utility of the certain return will exceed the expected utility of the return on the lottery:
\[
U(R) > \tfrac{1}{2} \bigl[ U(R - r) + U(R + r) \bigr]
\]
This is shown in graphic form in Figure 3.21.

Table 3.8 Attitude to risk

Attitude           Marginal utility U′   U″         Utility function U
Risk aversion      Decreasing            Negative   Concave
Risk neutrality    Constant              Zero       Linear
Taste for risk     Increasing            Positive   Convex

Figure 3.20 Utility function (concave for aversion, linear for neutrality, convex for taste for risk)

Figure 3.21 Aversion to risk

This figure shows R′, the certain return whose utility is equal to the expected utility of the lottery. The difference p = R − R′ represents the price that the investor is willing to pay to avoid having to participate in the lottery; it is known as the risk premium. Taylor expansions of U(R + r), U(R − r) and U(R′) = U(R − p) readily lead to the relation
\[
p = -\frac{U''(R)}{U'(R)} \cdot \frac{r^2}{2}
\]
The first factor in this expression is the absolute risk aversion coefficient:
\[
\alpha(R) = -\frac{U''(R)}{U'(R)}
\]
The two most frequently used examples of utility functions corresponding to risk aversion are the exponential function and the quadratic function. If U(R) = a e^{bR} with a and b < 0, we have α(R) = −b. If U(R) = aR² + bR + c with a < 0 and b > 0, we must of course restrict ourselves to values of R that do not exceed −b/2a in order for the utility function to remain increasing; the absolute risk aversion coefficient is then given by
\[
\alpha(R) = \frac{1}{-\dfrac{b}{2a} - R}
\]
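The quality of the approximation p ≈ α(R)·r²/2 can be checked numerically for a quadratic utility. The figures below (a = −1, b = 1, R = 0.2, r = 0.05) are our own illustration, not from the book.

```python
# Numerical check of the risk premium approximation p ≈ -U''(R)/U'(R) * r^2/2
# for a quadratic utility (the parameter values are illustrative).
from math import sqrt

a, b, c = -1.0, 1.0, 0.0
U = lambda x: a * x * x + b * x + c

R, r = 0.20, 0.05
lottery = 0.5 * (U(R - r) + U(R + r))    # expected utility of the lottery

# Certainty equivalent R' solves U(R') = lottery on the increasing branch.
Rprime = (-b + sqrt(b * b - 4 * a * (c - lottery))) / (2 * a)
p_exact = R - Rprime                      # exact risk premium

alpha = -(2 * a) / (2 * a * R + b)        # absolute risk aversion -U''/U'
p_approx = alpha * r * r / 2

print(p_exact, p_approx)                  # both close to 0.0041
```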

When this last (quadratic) form can be accepted for the utility function, we have another justification for defining the distribution of returns by the two parameters of mean and variance alone, without adding a normality hypothesis (see Section 3.1.1). In this case, in fact, the expected utility of the return on a portfolio (the quantity that the investor wishes to optimise) is given by
\[
E[U(R_P)] = E[aR_P^2 + bR_P + c] = aE(R_P^2) + bE(R_P) + c = a(\sigma_P^2 + E_P^2) + bE_P + c
\]
This quantity then depends on the first two moments only.

3.2.7.3 Selection of optimal portfolio

Let us now consider an investor who shows an aversion to risk and has to choose a portfolio from those on the efficient frontier. We begin by constructing the indifference curves associated with his utility function, that is, the curves corresponding to the couples (expectation, standard deviation) for which the expected utility of the return equals a given value (see Figure 3.22). These indifference curves are convex, the utility increasing as the curve moves upwards and to the left. By superimposing the indifference curves and the efficient frontier, it is easy to determine the portfolio P corresponding to the maximum expected utility, as shown in Figure 3.23.

Figure 3.22 Indifference curves

Figure 3.23 Selection of optimal portfolio


3.2.7.4 Other viewpoints

Alongside the efficient portfolio based on the investor's preference system, expressed through the utility function, other objectives or restrictions can be taken into consideration. Let us examine, for example, the case of a deficit constraint. As well as optimising the couple (E, σ), that is, determining the efficient frontier, and before selecting the portfolio (through the utility function), the return on the portfolio here must not fall below a fixed threshold38 u except with a very low probability p, say Pr[R_P ≤ u] ≤ p. If the hypothesis of normality of the return is accepted, we have
\[
\Pr\Bigl[ \frac{R_P - E_P}{\sigma_P} \le \frac{u - E_P}{\sigma_P} \Bigr] \le p
\]
that is,
\[
\frac{u - E_P}{\sigma_P} \le z_p
\]
Here, z_p is the p-quantile of the standard normal distribution (z_p < 0, as p is less than 1/2). The condition can thus be written as E_P ≥ u − z_p σ_P. The portfolios that obey the deficit constraint are located above the straight line with equation E_P = u − z_p σ_P (see Figure 3.24). The portion of the efficient frontier delimited by this straight line of constraint is the range of portfolios from which the investor will make his selection. If p is fixed, an increase of u (a higher required return) causes the straight line of constraint to move upwards. In the same way, if u is fixed, a reduction in p (more security with respect to the restriction) causes the straight line of constraint to move upwards while pivoting about the point (0, u). In both cases, the section of the efficient frontier that obeys the restriction is reduced. One can also, by making use of these properties, determine the optimal portfolio on the basis of one of the two criteria, by using the straight line tangential to the efficient frontier.

u σ

Figure 3.24 Deﬁcit constraint 38

If u = 0, this restriction means that except in a low probability event, the capital invested must be at least maintained.
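The deficit constraint can be checked directly once the normality hypothesis is accepted; a minimal sketch with illustrative numbers follows (the portfolio values are made up):

```python
# Deficit-constraint check under the normality hypothesis:
# Pr[R_P <= u] <= p  is equivalent to  E_P >= u - z_p * sigma_P, where z_p is
# the p-quantile of the standard normal (z_p < 0 for p < 1/2).
from statistics import NormalDist

def satisfies_deficit_constraint(e_p, sigma_p, u, p):
    """True if a normal return N(e_p, sigma_p) falls below u with probability <= p."""
    z_p = NormalDist().inv_cdf(p)      # z_p < 0 when p < 1/2
    return e_p >= u - z_p * sigma_p

# With u = 0: invested capital must be preserved except with probability p = 5 %.
print(satisfies_deficit_constraint(0.08, 0.04, u=0.0, p=0.05))  # True
print(satisfies_deficit_constraint(0.08, 0.10, u=0.0, p=0.05))  # False
```

The second call fails the constraint because the higher σP pushes the required expected return above 0.08.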


3.2.8 The market model

Some developments of the market model follow the reasoning contained in the construction of Sharpe's model, with the index replaced by the market in its totality. This model, however, reflects a macroeconomic line of thought more than a search for efficient portfolios.

3.2.8.1 Systematic risk and specific risk

We have already encountered the concept of the systematic risk of a security in Section 3.1.1:

βj = σjM / σM²

This measures the magnitude of the risk of the security (j) in comparison to the risk of the average security on the market. It appears as a regression coefficient when the return on this security is expressed as a linear function of the market return: Rjt = αj + βjRMt + εjt. It is, of course, assumed that the residuals verify the classical hypotheses of linear regression, establishing among other things that the residuals have zero expectation and constant variance and are not correlated with the explanatory variable RMt.

Alongside the systematic risk βj, which is the same for every period, another source of fluctuation in Rj is the residual εjt, which is specific to the period t. The term specific risk is given to the variance of the residuals: σεj² = var(εjt).

Note
In practice, the coefficients αj and βj of the regression are estimated using the least-squares method, for example β̂j = sjM/sM². The residuals are then estimated by ε̂jt = Rjt − (α̂j + β̂jRMt) and the specific risk is estimated using its ergodic estimator (1/T)·Σ(t=1..T) ε̂jt².

In the rest of this paragraph we will omit the index t relating to time. We will see how the risk σj² of a security consists of a systematic component and a specific component. We have:

σj² = var(Rj)
= E[(αj + βjRM + εj − E(αj + βjRM + εj))²]
= E[(βj(RM − EM) + εj)²]
= βj²E[(RM − EM)²] + E(εj²) + 2βjE[(RM − EM)εj]
= βj²var(RM) + var(εj)

Hence the announced decomposition relation:

σj² = βj²σM² + σεj²
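The estimation described in the note above can be sketched as follows. The return series are made up for illustration; the estimators are the sample analogues β̂j = sjM/sM², α̂j = R̄j − β̂jR̄M and the ergodic estimator of the specific risk.

```python
# Least-squares estimation of the market-model coefficients and of the
# specific risk, as in the note above. Return histories are illustrative.

def estimate_market_model(r_j, r_m):
    """Return (alpha_j, beta_j, specific risk) estimated over T observations."""
    t = len(r_j)
    mean_j = sum(r_j) / t
    mean_m = sum(r_m) / t
    s_jm = sum((x - mean_j) * (y - mean_m) for x, y in zip(r_j, r_m)) / t
    s_mm = sum((y - mean_m) ** 2 for y in r_m) / t
    beta = s_jm / s_mm                              # beta_j = s_jM / s_M^2
    alpha = mean_j - beta * mean_m
    residuals = [x - (alpha + beta * y) for x, y in zip(r_j, r_m)]
    specific = sum(e * e for e in residuals) / t    # ergodic estimator
    return alpha, beta, specific

r_m = [0.010, -0.020, 0.030, 0.000, 0.020]          # market returns (made up)
r_j = [0.020, -0.030, 0.040, 0.010, 0.015]          # security returns (made up)
alpha_j, beta_j, specific_risk = estimate_market_model(r_j, r_m)
```

In-sample, the estimates satisfy the decomposition exactly: the sample variance of Rj equals β̂j² times the sample variance of RM plus the estimated specific risk.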


3.2.8.2 Portfolio beta

By using the regression expression for Rj, RP can be developed easily:

RP = Σ(j=1..N) XjRj = Σ(j=1..N) Xj(αj + βjRM + εj) = Σ(j=1..N) Xjαj + (Σ(j=1..N) Xjβj)·RM + Σ(j=1..N) Xjεj

This shows that, as for the portfolio return, the portfolio beta is the average of the betas of the constituent securities, weighted by their proportions (expressed in market-value terms):

βP = Σ(j=1..N) Xjβj

3.2.8.3 Link between market model and portfolio diversification

As for the simple index model, it is supposed here that the regression residuals relative to the various securities are not correlated: cov(εi, εj) = 0 for i ≠ j. The portfolio risk is written as:

σP² = var(Σ(j=1..N) Xjαj + βPRM + Σ(j=1..N) Xjεj) = βP²σM² + Σ(j=1..N) Xj²σεj²

If, to simplify matters, one considers a portfolio consisting of N securities in equal proportions:

Xj = 1/N,  j = 1, ..., N

the portfolio risk develops as follows:

σP² = βP²σM² + (1/N²)·Σ(j=1..N) σεj² = βP²σM² + (1/N)·σε²

Here, the average residual variance has been introduced:

σε² = (1/N)·Σ(j=1..N) σεj²

The first term of the decomposition is independent of N, while the second tends towards 0 when N becomes very large. This analysis therefore shows that the portfolio risk σP² can be broken down into two terms:

• The systematic component βP²σM² (non-diversifiable risk).
• The specific component Σ(j=1..N) Xj²σεj² (diversifiable risk).
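The diversification effect above can be illustrated numerically. The parameter values below are made up; only the 1/N behaviour of the specific component matters.

```python
# For an equal-weight portfolio of N securities, the specific risk is (1/N)
# times the average residual variance, while the systematic component
# beta_P^2 * sigma_M^2 does not shrink with N. Numbers are illustrative.

def portfolio_risk_decomposition(n, beta_p=1.0, sigma_m=0.10, avg_resid_var=0.04):
    systematic = beta_p**2 * sigma_m**2    # non-diversifiable
    specific = avg_resid_var / n           # diversifiable, -> 0 as n grows
    return systematic, specific

for n in (1, 10, 100, 1000):
    systematic, specific = portfolio_risk_decomposition(n)
    print(n, round(systematic, 6), round(specific, 6))
```

As N grows the printed specific component shrinks by the same factor, while the systematic component stays constant: only the specific risk can be diversified away.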

3.3 MODEL OF FINANCIAL ASSET EQUILIBRIUM AND APPLICATIONS

3.3.1 Capital asset pricing model

Unlike the previous models, this model, developed independently by W. Sharpe39 and J. Lintner40 and known as the CAPM (MEDAF in French), is concerned not with choosing a portfolio for an individual investor but with the behaviour of a whole market when the investors act rationally41 and show an aversion to risk. The aim, in this situation, is to determine the exact value of an equity.

3.3.1.1 Hypotheses

The model being examined is based on a certain number of hypotheses. The hypotheses relating to investor behaviour are:

• They put together their portfolios using Markowitz's portfolio theory, that is, relying on the mean–variance pairing.
• They all have the same expectations, that is, none of them has any privileged information and they agree on the values of the parameters Ei, σi and σij to be used.

Hypotheses can also be laid down with regard to the transactions:

• They are made without cost.
• The purchase, sale and holding times are the same for all investors.

Finally, it is assumed that the following conditions are verified in relation to the market:

• There is no taxation, whether on increases in value, dividends or interest income.
• There are very many purchasers and sellers on the market and they have no influence on the market other than that exerted by the law of supply and demand.

39 Sharpe W., Capital asset prices, Journal of Finance, Vol. 19, 1964, pp. 435–42.
40 Lintner J., The valuation of risky assets and the selection of risky investments, Review of Economics and Statistics, Vol. 47, 1965, pp. 13–37.
41 That is, according to the portfolio theory based on the mean–variance analysis.


• There is a risk-free interest rate, RF, which is used for both borrowings and investments.
• The possibilities of borrowing and investing at this rate are not limited in terms of volume.

These hypotheses are of course not realistic. However, there are extensions of the model presented here that make some of the hypotheses formulated more flexible. In addition, even the basic model gives good results, as do the applications that arise from it (see Sections 3.3.3, 3.3.4 and 3.3.5).

3.3.1.2 Separation theorem

This theorem states that under the conditions specified above, all the portfolios held by the investors are, in equilibrium, combinations of a risk-free asset and the market portfolio.

According to the hypotheses, all the investors have the same efficient frontier for the equities and the same risk-free rate RF. Therefore, according to the study of Markowitz's model with the risk-free security (Section 3.2.5), each investor's portfolio is located on the straight line issuing from the point (0, RF) and tangential to the efficient frontier. This portfolio consists (see Figure 3.25) of:

• The risk-free asset, in proportion 1 − X.
• The portfolio A, corresponding to the tangent contact point, in proportion X.

The risky portfolio A is therefore the same for all investors. The market will therefore, in accordance with the principle of supply and demand, adapt the prices so that the proportions in this portfolio are those of the whole market (A = M) and the portfolios held by the investors are perfectly diversified. The investor's choice therefore bears only on the proportion X of the market portfolio (and thus the proportion 1 − X of the risk-free asset). If the portfolio chosen is located to the left of the point M (0 < X < 1), we are looking at a combination of the two investments. If it is to the right of M (X > 1), the investor borrows at the rate RF in order to acquire more than 100 % of the market portfolio.
The line in question is known as the market straight line.

Figure 3.25 Separation theorem and market straight line (the line from (0, RF) tangential to the efficient frontier at A = M)


Interpretation of the separation theorem is simple. The market straight line passes through the points (0, RF) and (σM, EM). Its equation is therefore given by:

EP = RF + ((EM − RF)/σM)·σP

The expected return EP on a portfolio is equal to the risk-free rate RF plus the risk premium collected by the investor when he agrees to take the risk σP. The coefficient of σP (the slope of the market straight line) is therefore the increase in expected return obtained for supporting one unit of risk: this is the unit price of risk on the market.

3.3.1.3 CAPM equation

We will now determine a relation very similar to the previous one – that is, a relation between expected return and risk – but in connection with a security instead of a portfolio. For any portfolio of equities B, the straight line that connects the points (0, RF) and (σB, EB) has the slope:

θB = (EB − RF)/σB

This slope is clearly at its maximum when B = M (see Figure 3.26) and, in the same way, the maximum value of θB² is θM². Therefore, if one terms the proportions of the various equities in the market portfolio X1, X2, ..., XN (with ΣXi = 1), we will have:

(θM²)′Xk = 0,  k = 1, ..., N

Since:

EM − RF = Σ(j=1..N) XjEj − Σ(j=1..N) XjRF = Σ(j=1..N) Xj(Ej − RF)

σM² = Σ(i=1..N) Σ(j=1..N) XiXjσij

Figure 3.26 CAPM (the slope of the line joining (0, RF) to (σB, EB) is maximal when B = M)


the derivative of:

θM² = (Σ(j=1..N) Xj(Ej − RF))² / (Σ(i=1..N) Σ(j=1..N) XiXjσij)

with respect to Xk is given by:

(θM²)′Xk = (2·(Σ(j=1..N) Xj(Ej − RF))·(Ek − RF)·σM² − (Σ(j=1..N) Xj(Ej − RF))²·2·Σ(j=1..N) Xjσkj) / σM⁴

= (2(EM − RF)(Ek − RF)σM² − 2(EM − RF)²·Σ(j=1..N) Xjσkj) / σM⁴

= 2(EM − RF)·((Ek − RF)σM² − (EM − RF)σkM) / σM⁴

This will be zero if:

Ek − RF = (EM − RF)·σkM/σM²

or:

Ek = RF + βk·(EM − RF)

This is termed the CAPM equation, which is interpreted in a similar way to the relation in the previous paragraph. The expected return Ek on the security (k) is equal to the risk-free rate RF plus a risk premium collected by the investor who agrees to take the risk. This risk premium increases with the importance of the risk of the security within the market in question (βk).

Note
As we have said, the hypotheses used as a basis for the model just developed are not realistic. Empirical studies have been carried out in order to determine whether the results obtained from the application of the CAPM are valid. One of the most detailed analyses is that carried out by Fama and MacBeth,42 which, considering the relation Ek = RF + βk(EM − RF) as an expression of Ek according to βk, tested the following hypotheses on the New York Stock Exchange (Figure 3.27):

• The relation Ek = f(βk) is linear and increasing.
• βk is a complete measurement of the risk of the equity (k) on the market; in other words, the specific risk σεk² is not a significant explanation of Ek.

42 Fama E. and MacBeth J., Risk, return and equilibrium: empirical tests, Journal of Political Economy, Vol. 71, No. 1, 1974, pp. 606–36.


Figure 3.27 CAPM test (Ek plotted against βk; the security market line passes through (0, RF) and (1, EM))

To do this, they used generalisations of the equation Ek = f (βk ), including powers of βk of a degree greater than 1 and a term that takes the speciﬁc risk into consideration. Their conclusion is that the CAPM model is in most cases acceptable.
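The CAPM equation derived above can be evaluated directly; the following is a minimal sketch with illustrative numbers (rates and beta are made up, not taken from the text).

```python
# The CAPM equation in computational form: E_k = R_F + beta_k * (E_M - R_F).
# All inputs below are illustrative.

def capm_expected_return(r_f, e_m, beta_k):
    """Equilibrium expected return of security k."""
    return r_f + beta_k * (e_m - r_f)

print(round(capm_expected_return(0.03, 0.08, 1.2), 4))  # 0.09
```

A security with βk > 1 earns more than the market in equilibrium, a security with βk = 0 earns exactly the risk-free rate.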

3.3.2 Arbitrage pricing theory

In the CAPM, the risk premium Ek − RF for an equity is expressed as a multiple of the risk premium EM − RF for the market:

Ek − RF = βk(EM − RF)

The proportionality coefficient is the β of the security. It can therefore be considered that this approach expresses the risk premium for an equity on the basis of the risk premium for a single explanatory macroeconomic factor or, which amounts to the same thing, on the basis of an aggregate that includes all the macroeconomic factors that interact with the market.

The arbitrage pricing theory,43 or APT, allows a more refined analysis of the portfolio than the CAPM, as breaking down the risk according to the single market factor, namely the beta, may prove insufficient to describe all the risks in a portfolio of equities. Hence the interest in resorting to risk breakdowns on the basis of several factors F1, F2, ..., Fp:

Ek − RF = Σ(j=1..p) αkj(EFj − RF)
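The multi-factor relation generalises the CAPM computation directly; a minimal sketch follows, with made-up sensitivities and factor expected returns.

```python
# Multi-factor APT relation: E_k - R_F = sum_j alpha_kj * (E_Fj - R_F).
# Sensitivities and factor expected returns are illustrative.

def apt_expected_return(r_f, sensitivities, factor_expected_returns):
    premium = sum(a * (e_f - r_f)
                  for a, e_f in zip(sensitivities, factor_expected_returns))
    return r_f + premium

e_k = apt_expected_return(0.03, [0.8, 0.3], [0.07, 0.05])
print(round(e_k, 4))  # 0.068
```

With a single factor equal to the market, the expression reduces to the CAPM equation with αk1 = βk.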

The APT theory shows that in an efficient market the quoted equity prices will be balanced by successive arbitrages, through the involvement of the actors on the market. If one makes a point of watching developments in relative prices, it is possible to extract from the market a small number of arbitrage factors that allow the prices to balance out. This is precisely what the APT model does.

43 Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976, pp. 343–62.


The early versions44 of the APT model relied on a previously compiled list of basic factors, such as an industrial activity index, the spread between short-term and long-term interest rates, the difference in returns on bonds with very different ratings (see Section 4.2.1) etc. The coefficients αk1, ..., αkp are then determined by a regression technique based on historical observations Rkt and RFj,t (j = 1, ..., p).

The more recent versions are based on more empirical methods that provide uncorrelated factors through a statistical technique45 (factorial analysis), without the number of factors being known beforehand and even without their having any economic interpretation at all. The factors obtained46 from temporal series of returns on asset prices are purely statistical. Taken individually, they are not variables commonly used to describe a portfolio construction process or a management strategy. None of them represents an interest, inflation or exchange rate. They are the equivalent of an orthogonal axis system in geometry. The sole aim is to obtain a reference system that allows a description of the interrelations between the assets studied, on a basis that is stable over time.

Once this reference system is established, the risk on any quoted asset (equities, bonds, investment funds etc.) is broken down into a systematic part (common to all assets in the market), which can be represented in the factor space, and a specific part (particular to the asset). The systematic part is then explained by sensitivity coefficients (αkj) with respect to the different statistical factors. The explanatory power of the model comes from the fact that the different standard variables (economic, sectorial, fundamental etc.) used to understand its behaviour are also represented in the factor reference system, provided an associated quoted support (price history) exists.

The relation that links the return on a security to the various factors allows a breakdown of its variance into a part linked to the systematic risk factors (the explanatory statistical factors) and a part that is specific to the security and therefore diversifiable (regression residuals etc.), that is:

σk² = Σ(j=1..p) αkj²·var(RFj) + σεk²
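The variance breakdown above can be sketched in a few lines, assuming uncorrelated factors as the extraction technique is designed to provide; all numbers are made up.

```python
# APT variance decomposition: sigma_k^2 = sum_j alpha_kj^2 * var(R_Fj) + sigma_eps_k^2,
# assuming uncorrelated factors. Sensitivities and variances are illustrative.

def apt_variance_decomposition(sensitivities, factor_variances, specific_variance):
    """Return (systematic variance, total variance) of a security's return."""
    systematic = sum(a**2 * v for a, v in zip(sensitivities, factor_variances))
    return systematic, systematic + specific_variance

systematic, total = apt_variance_decomposition([0.8, 0.3], [0.0025, 0.0016], 0.0009)
print(round(systematic, 6), round(total, 6))  # 0.001744 0.002644
```

Only the last term, the specific variance, is diversifiable.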

Example
A technically developed version of this method, accompanied by software, has been produced by Advanced Portfolio Technologies Inc. It extracts from the market a series of statistical factors (represented by temporal series of crossed returns on assets), using a form-search algorithm. In this way, if the left of Figure 3.28 represents the observed series of returns on four securities, the right of the same figure illustrates the three primary factors that allow reconstruction of the previous four series by linear combination. For example, the first series breaks down into:

R1 − RF = 1·(RF1 − RF) + 1·(RF2 − RF) + 0·(RF3 − RF) + ε1

44 Dhrymes P. J., Friend I. and Gultekin N. B., A critical re-examination of the empirical evidence on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46. Chen N. F., Roll R. and Ross S. A., Economic forces of the stock market, Journal of Business, No. 59, 1986, pp. 383–403. More generally, Grinold C. and Kahn N., Active Portfolio Management, McGraw-Hill, 1998.
45 See for example Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990; or Morrison D., Multivariate Statistical Methods, McGraw-Hill, 1976.
46 Readers interested in the mathematical developments produced by extracting statistical factors from historical series of returns on assets should read Mehta M. L., Random Matrices, Academic Press, 1996. This work deals in depth with problems of eigenvalues and eigenvectors for very large randomly generated matrices.

Figure 3.28 Arbitrage pricing theory (left: observed return series 1–4; right: extracted factors 1–3)

3.3.3 Performance evaluation

3.3.3.1 Principle

The portfolio manager,47 of course, has an interest in evaluating the product that he manages. To do this properly, he will compare the return on his portfolio with the return on the market in which he is investing. From a practical point of view, this comparison will be made in relation to an index representative of the market sector in question.

Note
The return on a real portfolio between moments s and t is calculated simply using the relation RP,[s;t] = (Vt − Vs)/Vs, provided there has been no movement within the portfolio during the interval of time in question. However, there are in general flows (new securities purchased, securities sold etc.). It is therefore advisable to evaluate the return with the effect of these movements eliminated. Denote by t1 < ... < tn the moments at which these movements occur and set t0 = s and tn+1 = t. The return to be taken into consideration is then given by:

RP,]s;t] = Π(k=0..n) (1 + RP,]tk;tk+1[) − 1

Here, the sub-period returns are defined as:

RP,]tk;tk+1[ = (V⁻tk+1 − V⁺tk) / V⁺tk

where V⁻tj and V⁺tj represent the value of the portfolio just before and just after the movement at moment tj respectively.

In Section 3.2, it was clearly shown that the quality of a security or a portfolio is not measured merely by its return. What should one think of two portfolios A and B whose returns for a given period are 6.2 % and 6.3 % respectively, but where the attendant risk of B is twice that of A? The performance measurement indices presented below take into account not just the return, but also the risk of the security or portfolio.

47 Management strategies, both active and passive, are dealt with in the following paragraph.
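The chain-linked return described in the note above can be sketched as follows; portfolio values and the single cash flow are made up for illustration.

```python
# Chain-linked return between dates s and t with intermediate cash flows:
# each sub-period return runs from the value just after one movement to the
# value just before the next, and the sub-periods are then compounded.

def chain_linked_return(subperiods):
    """subperiods: list of (value just after previous flow, value just before next flow)."""
    growth = 1.0
    for v_plus, v_minus in subperiods:
        growth *= v_minus / v_plus      # 1 + R over the sub-period
    return growth - 1.0

# Portfolio worth 100 at s, worth 104 just before a 10 injection (114 just
# after it), and worth 120 at t.
r = chain_linked_return([(100.0, 104.0), (114.0, 120.0)])
print(round(r, 4))  # 0.0947
```

The naive computation (120 − 100)/100 = 20 % would wrongly credit the injected cash as performance; the chain-linked figure removes that effect.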


The indicators presented here are all based on relations produced by the financial asset equilibrium model, and more particularly on the CAPM equation. They therefore assume that the hypotheses underlying this model are satisfied. The first two indicators are based on the market straight-line equation and the CAPM equation respectively; the third is a variation on the second.

3.3.3.2 Sharpe index

The market straight-line equation is:

EP = RF + ((EM − RF)/σM)·σP

which can be rewritten as follows:

(EP − RF)/σP = (EM − RF)/σM

This relation expresses that the excess return (compared to the risk-free rate), standardised by the standard deviation, is (in equilibrium) identical for a well-diversified portfolio and for the market. The term Sharpe index is given to the expression:

SIP = (EP − RF)/σP

which in practice is compared to the equivalent expression calculated for an index representative of the market.

which in practice is compared to the equivalent expression calculated for a market representative index. Example Let us take the data used for the simple Sharpe index model (Section 3.2.4): E1 = 0.05 σ1 = 0.10 ρ12 = 0.3

E2 = 0.08 σ2 = 0.12 ρ13 = 0.1

E3 = 0.10 σ3 = 0.15 ρ23 = 0.4

Let us then consider the speciﬁc portfolio relative to the value λ = 0.010 for the risk parameter. In this case, we will have X1 = 0.4387, X2 = 0.1118 and X3 = 0.4496, and therefore EP = 0.0758 and σP = 0.0912. We will also have EI = 0.04 and σI = 0.0671, and RF is taken, as in Section 3.2.5, as 0.03. The Sharpe index for the portfolio is therefore given as: SIP =

0.0758 − 0.03 = 0.7982 0.0912

The Sharpe index relative to the index equals: SII =

0.04 − 0.03 = 0.1490 0.0671

This shows that the portfolio in question is performing better than the market.
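Both indices can be recomputed from the stated inputs of the example:

```python
# Sharpe index: excess return over the risk-free rate per unit of total risk,
# recomputed from the example's inputs (E_P = 0.0758, sigma_P = 0.0912,
# E_I = 0.04, sigma_I = 0.0671, R_F = 0.03).

def sharpe_index(expected_return, risk_free, std):
    return (expected_return - risk_free) / std

si_p = sharpe_index(0.0758, 0.03, 0.0912)   # portfolio
si_i = sharpe_index(0.04, 0.03, 0.0671)     # market index
print(round(si_p, 4), round(si_i, 4))  # 0.5022 0.149
```

The portfolio index exceeds the market index, which is the comparison that matters for the conclusion.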


Although Section 3.3.4 is given over to portfolio management strategies for equities, some thoughts can already be given to the role of the Sharpe index in the taking of investment (and disinvestment) decisions. Suppose that we are in possession of a portfolio P and that we are envisaging the purchase of an additional quantity of equities A, the proportions of P and A being noted respectively XP and XA. Of course, XP + XA = 1, and XA is positive or negative depending on whether an investment or a disinvestment is involved. The portfolio produced as a result of the decision will be noted P′ and its return will be given by RP′ = XPRP + XARA. The expected return and variance of return of the new portfolio are:

EP′ = (1 − XA)EP + XAEA
σP′² = (1 − XA)²σP² + XA²σA² + 2XA(1 − XA)σPσAρAP

We take as the purchase criterion for A the fact that the Sharpe index of the new portfolio is at least equal to that of the old one, SIP′ ≥ SIP, which is expressed as:

((1 − XA)EP + XAEA − RF)/σP′ ≥ (EP − RF)/σP

By isolating the expected return on A, we obtain the condition:

EA ≥ EP + ((EP − RF)/XA)·(σP′/σP − 1)

It is worth noting that if A does not increase the risk of the portfolio (σP′ ≤ σP), it is not even necessary that EA ≥ EP in order to justify purchasing A.

Example
Suppose that one has a portfolio for which EP = 0.08, that the risk-free rate is RF = 0.03 and that one is envisaging a purchase of A in the proportion XA = 0.02. The condition then becomes:

EA ≥ 0.08 + (5/2)·(σP′/σP − 1)

In the specific case where the management of risks is such that σA = σP, the ratio of the standard deviations is given by:

σP′/σP = √((1 − XA)² + XA² + 2XA(1 − XA)ρAP) = √(0.9608 + 0.0392ρAP)

This allows the conditions of investment to be determined according to the value of the correlation coefficient: if ρAP = −1, 0 or 1, the condition becomes EA ≥ −0.02, EA ≥ 0.0305 and EA ≥ 0.08 respectively.
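The threshold in the example can be reproduced as a function of the correlation coefficient:

```python
# Minimum acceptable E_A for the example's inputs (E_P = 0.08, R_F = 0.03,
# X_A = 0.02, sigma_A = sigma_P), so the result depends only on rho_AP.
import math

def min_expected_return(e_p, r_f, x_a, rho_ap):
    # ratio sigma_P'/sigma_P when sigma_A = sigma_P
    ratio = math.sqrt((1 - x_a)**2 + x_a**2 + 2 * x_a * (1 - x_a) * rho_ap)
    return e_p + (e_p - r_f) / x_a * (ratio - 1)

for rho in (-1.0, 0.0, 1.0):
    print(round(min_expected_return(0.08, 0.03, 0.02, rho), 4))  # -0.02, 0.0305, 0.08
```

The lower the correlation of A with the existing portfolio, the lower the return required of A, since A then reduces the overall risk.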


3.3.3.3 Treynor index

The CAPM equation for the kth equity in the portfolio, Ek = RF + βk(EM − RF), allows the following to be written:

Σ(k=1..N) XkEk = Σ(k=1..N) Xk·RF + (Σ(k=1..N) Xkβk)·(EM − RF)

or:

EP = RF + βP(EM − RF)

Taking account of the fact that βM = 1, this last relation can be written as:

(EP − RF)/βP = (EM − RF)/βM

The interpretation is similar to that of the Sharpe index. The Treynor index is therefore defined by:

TIP = (EP − RF)/βP

which will be compared to the similar expression for an index.

Example
Let us take the data above, with the addition of (see Section 3.2.4): β1 = 0.60, β2 = 1.08, β3 = 1.32. This will give βP = 0.9774. The Treynor index for this portfolio is therefore:

TIP = (0.0758 − 0.03)/0.9774 = 0.0469

meanwhile, the Treynor index relative to the market index is:

TII = (0.04 − 0.03)/1 = 0.0100

This will lead to the same conclusion.

3.3.3.4 Jensen index

According to the reasoning behind the Treynor index, we have EP − RF = βP(EM − RF). This relation holding (in equilibrium) for a well-diversified portfolio, a portfolio P will present an excess return in relation to the market if there is a number αP > 0 such that:

EP − RF = αP + βP(EM − RF)

The Jensen index, JIP = α̂, is the estimator of the constant term in the regression:

EP,t − RF,t = α + β(EI,t − RF,t)

In this regression, the variable to be explained is the excess return of the portfolio over the risk-free rate, and the explanatory variable is the excess return of the market representative index. The value of JIP is, of course, compared to 0.


Example
It is easy to verify that with the preceding data we have JIP = (0.0758 − 0.03) − 0.9774·(0.04 − 0.03) = 0.0360, which is strictly positive.

3.3.4 Equity portfolio management strategies

3.3.4.1 Passive management

The aim of passive management is to obtain a return equal to that of the market. By the definition of the market, the gains (returns higher than market returns) realised by certain investors are compensated by the losses (returns lower than market returns) suffered by other investors:48 the average return obtained by all the investors together is the market return. The reality is a little different: because of transaction costs, the average return enjoyed by investors is slightly less than the market return.

The passive strategy therefore consists of:

• Putting together a portfolio of identical (or very similar) composition to the market, which corresponds to optimal diversification.
• Limiting the volume of transactions as far as possible.

This method of operation poses a number of problems. For example, for the management of some types of portfolio, regulations dictate that each security may only be present up to a fixed maximum proportion, which is incompatible with passive management if a security represents a particularly high share of the stock-exchange capitalisation of the market. Another problem is the presence of some securities that not only have high prices but are indivisible; this may lead to the construction of portfolios with a value so high that they become unusable in practice.

These problems have led to the creation of 'index funds', collective investment organisations that 'imitate' the market. After choosing an index that represents the market in which one wishes to invest, one puts together a portfolio consisting of the same securities as those in the index (or sometimes simply the largest ones), in the same proportions.
Of course, as and when the prices of the constituent equities change, the composition of the portfolio will have to be adapted, and this presents a number of difficulties. The reaction time inevitably causes differences between the return on the portfolio and the market return; these are known as 'tracking errors'. In addition, this type of management incurs a number of transaction costs, for adapting the portfolio to the index, for reinvesting dividends etc. For these reasons, the return on such a portfolio will in general be slightly lower than that of the index.

3.3.4.2 Active management

The aim of active management is to obtain a return higher than the market return. A fully efficient market can be beaten only temporarily and by chance: in the long term, the return cannot exceed the market return. Active management therefore assumes that the market is not fully efficient.

48 This type of situation is known in game theory as a zero-sum game. Refer for example to Binmore K., Jeux et théorie des jeux, De Boeck & Larcier, 1999.


Two main principles allow the stated target to be pursued:

1) Asset allocation, which evolves over time and is also known as market timing, consists of putting together a portfolio made up partly of the market portfolio or an index portfolio and partly of a risk-free asset (or one significantly less risky than equities, such as a bond). The respective proportions of these two components are then changed as time passes, depending on whether a rise or a fall in the index is anticipated.

2) Stock picking consists of putting together a portfolio of equities by choosing the securities considered to be undervalued and likely to produce a return higher than the market return in the near or more distant future (market reaction).

In practice, professionals use strategies based on one of the two approaches or a mixture of the two. In order to assess the quality of active management, the portfolio put together should be compared with the market portfolio from the point of view of expected return and of risk incurred. Such portfolio performance indices were studied in Section 3.3.3. Let us now examine some methods of market timing and a method of stock picking: the application of the dividend discount model.

3.3.4.3 Market timing

This technique consists of managing a portfolio made up of the market portfolio (M) for equities and a bond (O) in the respective proportions X and 1 − X, X being adapted according to the expected performance of the two components. These performances, which determine a market timing policy, may be assessed using different criteria:

• The price–earnings ratio, introduced in Section 3.1.3: PER = price/earnings.
• The yield gap, which is the ratio between the return on the bond and the return on the equities (dividend/price).
• The earning yield, which is the product of the PER and the bond rate.
• The risk premium, which is the difference between the return on the market portfolio and the return on the bond: RP = EM − EO. It may be estimated using a history, but it is preferable to use an estimation produced beforehand by a financial analyst, for example using the DDM (see below).

Of course, small values for the first three criteria are favourable to investment in equities; the situation is reversed for the risk premium.

The first method for implementing a market timing policy is recourse to decision channels. If one refers to one of the four criteria mentioned above as c, for which historical observations are available (and therefore an estimation c̄ of its average and sc of its standard deviation), we choose, somewhat arbitrarily, to invest a certain percentage in equities depending on the observed value of c compared to c̄, the difference between the two being modulated by sc. We may choose, for example, to invest 70 %, 60 %, 50 %, 40 % or 30 % in equities depending on the position of c in relation to the limits:49 c̄ − (3/2)sc, c̄ − (1/2)sc, c̄ + (1/2)sc and c̄ + (3/2)sc (Figure 3.29).

Figure 3.29 Fixed decision channels

Figure 3.30 Moving decision channels

This method does not take account of the change of the parameter c over time. The c̄ and sc parameters can therefore be calculated over a sliding history (for example, one year) (Figure 3.30).

Another, more rigorous method can be used with the risk premium only. In the search for the efficient frontier, we have looked each time for the minimum with respect to the proportions of the expression σP² − λEP, in which the λ parameter corresponds to the risk (λ = 0 for a cautious portfolio, λ = +∞ for a speculative portfolio). This parameter is equal to the slope of the straight line in the plane (E, σ²) tangential to the efficient frontier and issuing from the point (RF, 0). According to the separation theorem (see Section 3.3.1), the contact point of this tangent corresponds to the market portfolio (see Figure 3.31), and in consequence we have:

λ = σM² / (EM − RF)

In addition, the return on a portfolio consisting of a proportion X of the market portfolio and a proportion 1 − X of the bond is given by RP = XRM + (1 − X)RO, which

49 The order of the channels must be reversed for the risk premium.

Figure 3.31 Separation theorem (tangent from (RF, 0) in the (E, σ²) plane touches the efficient frontier at (EM, σM²))

allows the following to be determined:

EP = XEM + (1 − X)EO
σP² = X²σM² + 2X(1 − X)σMO + (1 − X)²σO²

The problem therefore consists of determining the value of X which minimises the expression:

Z(X) = σP² − λEP = X²σM² + 2X(1 − X)σMO + (1 − X)²σO² − λ[XEM + (1 − X)EO]

The derivative of this function:

Z′(X) = 2XσM² + 2(1 − 2X)σMO − 2(1 − X)σO² − λ(EM − EO)
= 2X(σM² − 2σMO + σO²) + 2σMO − 2σO² − λ·RP

provides the proportion sought:

X = (λ·RP − 2(σMO − σO²)) / (2(σM² − 2σMO + σO²))

or, in the same way, replacing λ and RP by their values:

X = (((EM − EO)/(EM − RF))·σM² − 2(σMO − σO²)) / (2(σM² − 2σMO + σO²))

Example
If we have the following data:

EM = 0.08  σM = 0.10
EO = 0.06  σO = 0.02
RF = 0.04  ρMO = 0.6


we can calculate successively:

σ_MO = 0.10 · 0.02 · 0.6 = 0.0012
λ = 0.10²/(0.08 − 0.04) = 0.25
RP = 0.08 − 0.06 = 0.02

and therefore:

X = [0.25 · 0.02 − 2 · (0.0012 − 0.02²)] / [2 · (0.10² − 2 · 0.0012 + 0.02²)] = 0.2125
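As a numerical check, the optimal proportion X can be computed directly from the formula above. A minimal sketch in Python (function and variable names are ours, not the book's):

```python
# Sketch: optimal market-portfolio weight X from the minimisation of
# Z(X) = sigma_P^2 - lambda*E_P, using the chapter's example figures.

def optimal_weight(E_M, E_O, R_F, s_M, s_O, rho):
    """Weight X of the market portfolio minimising Z(X)."""
    s_MO = s_M * s_O * rho          # covariance between market portfolio and bonds
    lam = s_M**2 / (E_M - R_F)      # slope of the tangent in the (E, sigma^2) plane
    RP = E_M - E_O                  # risk premium of equities over bonds
    return (lam * RP - 2 * (s_MO - s_O**2)) / (2 * (s_M**2 - 2 * s_MO + s_O**2))

X = optimal_weight(E_M=0.08, E_O=0.06, R_F=0.04, s_M=0.10, s_O=0.02, rho=0.6)
print(round(X, 4))   # 0.2125, i.e. 21.25 % equities and 78.75 % bonds
```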

Under these conditions, therefore, it is advisable to invest 21.25 % in equities (market portfolio) and 78.75 % in bonds.

3.3.4.4 Dividend discount model
The aim of the dividend discount model, or DDM, is to compare the expected return of an equity with its equilibrium return, which allows us to determine whether the equity is overvalued or undervalued. The expected return, R̃_k, is determined using a model for discounting future dividends. Reasoning similar to that used in the Gordon–Shapiro formula (Section 3.1.3), or a generalisation of that reasoning, can be applied. While the Gordon–Shapiro relation assumes a constant rate of growth for dividends, more developed models (two-rate models) use, for example, a rate of growth constant over several years, followed by another, lower rate for subsequent years. Alternatively, a three-rate model may be used, with a period of a few years between the two constant-rate periods in which the growth rate declines linearly in order to make a continuous connection. The equilibrium return E_k is determined using the CAPM equation (Section 3.3.1). This equation is written E_k = R_F + β_k(E_M − R_F). If one considers that it expresses E_k as a function of β_k, we are looking at a straight-line equation; the line passes through the point (0, R_F) and, since β_M = 1, through the point (1, E_M). This straight line is known as the financial asset evaluation line or the security market line. If the expected return R̃_k for each security were equal to its equilibrium return E_k, all the points (β_k, R̃_k) would be located on the security market line. In practice, this is not the case, because of certain inefficiencies in the market (see Figure 3.32).

Figure 3.32 Security market line


This technique considers that the evaluation R̃_k made by the analysts is correct and that the differences noted are due to market inefficiency. Therefore, the securities whose representative point is located above the security market line are considered to be undervalued, and the market should sooner or later rectify the situation and produce an additional return for the investor who purchased those securities.

3.4 EQUITY DYNAMIC MODELS

The above paragraphs deal with static aspects, considering merely a 'photograph' of the situation at a given moment. We will now touch on the modelling of developments in equity returns or prices over time. The notation used here is a little different: the value of the equity at moment t is noted S_t. This is a classic notation (S indicating 'stock'); in addition, the present models are used among other things to support the development of option valuation models for equities (see Section 5.3), for which the notation C_t is reserved for equity options (C indicating 'call'). Finally, we should point out that what follows, unless specified otherwise, is valid only for equities that do not give rise to the distribution of dividends.

3.4.1 Deterministic models

3.4.1.1 Discrete model
Here, the equity is evaluated at moments t = 0, 1, etc. If it is assumed that the return on the equity between moments t and t + 1 is i, we can write i = (S_{t+1} − S_t)/S_t, which leads to the evolution equation S_{t+1} = S_t · (1 + i). If the rate of return i is constant and the initial value S_0 is taken into account, the (difference) equation above has the solution S_t = S_0 · (1 + i)^t. If the rate varies from period to period (i_k for the period ]k − 1; k]), the previous relation becomes S_t = S_0 · (1 + i_1)(1 + i_2) . . . (1 + i_t).

3.4.1.2 Continuous model
We are looking here at an infinitesimal development in the value of the security. If it is assumed that the return between moments t and t + Δt (with 'small' Δt) is proportional to the duration Δt, with a proportionality factor δ:

δ · Δt = (S_{t+Δt} − S_t)/S_t

the evolution equation is a differential equation:⁵⁰ S′_t = S_t · δ. The solution to this equation is given by S_t = S_0 · e^{δt}. The link between this relation and the corresponding relation for the discrete case will be noted, provided δ = ln(1 + i).

⁵⁰ Obtained by making Δt tend towards 0.
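The correspondence between the two models can be checked in one line, under the stated relation δ = ln(1 + i); the figures below are chosen arbitrarily for illustration:

```python
# Sketch: the discrete evolution S_t = S0*(1+i)^t and the continuous
# S_t = S0*exp(delta*t) coincide at integer dates when delta = ln(1+i).
import math

S0, i, t = 100.0, 0.05, 10
delta = math.log(1 + i)

S_discrete = S0 * (1 + i) ** t
S_continuous = S0 * math.exp(delta * t)
print(abs(S_discrete - S_continuous) < 1e-9)   # True
```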


If the rate of return δ is not constant, the differential evolution equation will take the form S′_t = S_t · δ(t), thus leading to the more complex solution S_t = S_0 · e^{∫_0^t δ(u) du}.

Note
The parameters appearing in the above models (the constant rates i and δ, or the variable rates i_1, i_2, . . . and δ(t)) should of course, for practical use, be estimated on the basis of historical observations.

3.4.1.3 Generalisation
These two aspects, discrete and continuous, can of course be superimposed. We therefore consider:

• A continuous evolution of the rate of return, represented by the function δ(t). On top of this:
• A set of discrete variations occurring at moments τ_1, τ_2, . . . , τ_n, so that the rate of return between τ_{k−1} and τ_k is equal to i_k.

If n is the greatest integer such that τ_n ≤ t, the change in the value is given by:

S_t = S_0 · (1 + i_1)^{τ_1} (1 + i_2)^{τ_2−τ_1} . . . (1 + i_n)^{τ_n−τ_{n−1}} (1 + i_{n+1})^{t−τ_n} · e^{∫_0^t δ(u) du}

This presentation allows the process of dividend payment, for example, to be taken into consideration in a discrete or continuous model. Therefore, where the model includes only the continuous part represented by δ(t), the above relation represents the change in the value of an equity that pays dividends at moments τ_1, τ_2, etc., with a total D_k paid at τ_k and linked to i_k by the relation:

i_k = −D_k / S_{τ_k(−)}

Here, S_{τ_k(−)} is the value of the security just before payment of the kth dividend.

3.4.2 Stochastic models

3.4.2.1 Discrete model
It is assumed that the development from one period to another occurs as follows: the equity at moment t has the (random) value S_t and will, at the following moment t + 1, have one of the two values S_t·u (higher than S_t) or S_t·d (lower than S_t), with the respective probabilities α and (1 − α). We therefore have d ≤ 1 ≤ u, but it is also supposed that d ≤ 1 < 1 + R_F ≤ u, without which an arbitrage opportunity would clearly be possible. In practice, the parameters u, d and α should be estimated on the basis of observations.


Generally speaking, the following graphic representation is used for evolutions in equity prices:

S_t → S_{t+1} = S_t · u   (probability α)
S_t → S_{t+1} = S_t · d   (probability 1 − α)

It is assumed that the parameters u, d and α remain constant over time, and we will no longer show the probability α explicitly in the following graphs; the rising branches, for example, will always correspond to an increase (at the rate u) in the value of the security, with probability α. Note that the return of the equity between the moments t and t + 1 is given by:

(S_{t+1} − S_t)/S_t = u − 1 (probability α) or d − 1 (probability 1 − α)

Between the moments t + 1 and t + 2, we will have, in the same way and according to the branch obtained at the end of the previous period:

S_{t+1} = S_t · u → S_{t+2} = S_{t+1} · u = S_t · u²  or  S_{t+2} = S_{t+1} · d = S_t · ud
S_{t+1} = S_t · d → S_{t+2} = S_{t+1} · u = S_t · ud  or  S_{t+2} = S_{t+1} · d = S_t · d²

It is therefore noted that a rise followed by a fall leads to the same result as a fall followed by a rise. Generally speaking, a graph known as a binomial tree can be constructed (see Figure 3.33), rising from period 0 (when the equity has a certain value S_0) to the period t. It is therefore evident that the (random) value of the equity at moment t is given by S_t = S_0 · u^N d^{t−N}, in which the number N of rises is of course a binomial random variable⁵¹ with parameters (t; α):

Pr[N = k] = C(t, k) · α^k (1 − α)^{t−k}

where C(t, k) is the binomial coefficient. The following property can be demonstrated:

E(S_t) = S_0 · (αu + (1 − α)d)^t


Figure 3.33 Binomial tree

⁵¹ See Appendix 2 for the development of this concept and for the properties of the random variable.
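The property E(S_t) = S_0 · (αu + (1 − α)d)^t can be checked by brute force, summing over the t + 1 terminal values of the tree. A small sketch with arbitrarily chosen parameters:

```python
# Sketch: exhaustive check of E(S_t) = S0*(alpha*u + (1-alpha)*d)^t
# in the binomial model, summing over the t+1 terminal nodes.
from math import comb

S0, u, d, alpha, t = 100.0, 1.1, 0.95, 0.6, 5

# direct expectation: terminal value S0*u^k*d^(t-k) with binomial weights
E_direct = sum(comb(t, k) * alpha**k * (1 - alpha)**(t - k) * S0 * u**k * d**(t - k)
               for k in range(t + 1))
E_formula = S0 * (alpha * u + (1 - alpha) * d) ** t
print(abs(E_direct - E_formula) < 1e-9)   # True
```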


In fact, what we have is:

E(S_t) = Σ_{k=0}^{t} S_0 · u^k d^{t−k} · C(t, k) · α^k (1 − α)^{t−k}
       = S_0 · Σ_{k=0}^{t} C(t, k) · (αu)^k ((1 − α)d)^{t−k}

This leads to the relation stated, through the Newton binomial formula. Note that this property is a generalisation to the random case of the deterministic formula S_t = S_0 · (1 + i)^t.

3.4.2.2 Continuous model
The method of equity value change shown in the binomial model is of the random walk type. At each transition, two movements are possible (rise or fall) with unchanged probabilities. When the period between transactions tends towards 0, this type of random sequence converges towards a standard Brownian motion or SBM.⁵² Remember that we are looking at a stochastic process w_t (a random variable that is a function of time), which obeys the following properties:

• w_0 = 0.
• w_t is a process with independent increments: if s < t < u, then w_u − w_t is independent of w_t − w_s.
• w_t is a process with stationary increments: the random variables w_{t+h} − w_t and w_h are identically distributed.
• Regardless of what t may be, the random variable w_t is distributed according to a normal law with zero mean and standard deviation √t:

f_{w_t}(x) = (1/√(2πt)) · e^{−x²/2t}

The first use of this process for modelling the development in the value of a financial asset was produced by L. Bachelier.⁵³ He assumed that the value of a security at a moment t is a first-degree function of the SBM: S_t = a + b·w_t. According to the above definition, a is the value of the security at t = 0 and b is a measure of the volatility σ of the security per unit of time. The relation used was therefore S_t = S_0 + σ·w_t. The shortcomings of this approach are of two types:

• The same absolute variation (€10 for example) corresponds to very different variations in return depending on the level of the price (20 % for a quotation of €50 and 5 % for a value of €200).
• The random variable S_t follows a normal law with mean S_0 and standard deviation σ√t; this model therefore allows for negative prices.

⁵² Appendix 2 provides details of the results, reasoning and properties of these stochastic processes.
⁵³ Bachelier L., Théorie de la spéculation, Gauthier-Villars, 1900. Several more decades were to pass before this reasoning was finally accepted and improved upon.


For this reason, P. Samuelson⁵⁴ proposed the following model. During the short interval of time [t; t + dt], the return (and not the price) alters according to an Itô process:

dS_t/S_t = (S_{t+dt} − S_t)/S_t = E_R · dt + σ_R · dw_t

Here, the non-random term (the trend) is proportional to the expected return, and the stochastic term involves the volatility per unit of time of this return. This model is termed a geometric Brownian motion.

Example
Figure 3.34 shows a simulated trajectory (development over time) for 1000 very short periods with the values E_R = 0.1 and σ_R = 0.02, based on a starting value of S_0 = 100.

We can therefore establish the first property in the context of this model: the stochastic process S_t showing the changes in the value of the equity can be written as:

S_t = S_0 · exp[(E_R − σ_R²/2) · t + σ_R · w_t]

This shows that S_t follows a log-normal distribution (it can only take on positive values). In fact, applying the Itô formula⁵⁵ to the function f(x, t) = ln x, where x = S_t, we obtain:

d(ln S_t) = [0 + E_R S_t · (1/S_t) − (1/2)·σ_R² S_t² · (1/S_t²)] · dt + σ_R S_t · (1/S_t) · dw_t
          = (E_R − σ_R²/2) · dt + σ_R · dw_t

This equation resolves into: ln S_t = C* + (E_R − σ_R²/2) · t + σ_R · w_t


Figure 3.34 Geometric Brownian motion

⁵⁴ Samuelson P., Mathematics of speculative price, SIAM Review, Vol. 15, No. 1, 1973.
⁵⁵ See Appendix 2.
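A trajectory such as the one in Figure 3.34 can be simulated by discretising the process with small steps dt, using the exponential (log-normal) form derived above; a sketch with the same parameters (the function name and step size are our own choices):

```python
# Sketch: one simulated trajectory of the geometric Brownian motion
# dS/S = E_R*dt + sigma_R*dw, with the Figure 3.34 parameters
# (E_R = 0.1, sigma_R = 0.02, S0 = 100, 1000 very short periods).
import math
import random

def gbm_path(S0, E_R, sigma_R, n_steps, dt, seed=0):
    rng = random.Random(seed)
    S = S0
    path = [S]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        S *= math.exp((E_R - sigma_R**2 / 2) * dt + sigma_R * dw)
        path.append(S)
    return path

path = gbm_path(100.0, 0.1, 0.02, n_steps=1000, dt=0.001)
print(len(path), path[0])   # 1001 100.0
```

Because the increments enter through an exponential, every simulated price stays strictly positive, as the log-normal property requires.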


The integration constant C* is of course equal to ln S_0, and the passage to the exponential gives the formula stated. It is then easy to deduce the moments of the random variable S_t:

E(S_t) = S_0 · e^{E_R·t}
var(S_t) = S_0² · e^{2E_R·t} · (e^{σ_R²·t} − 1)

The first of these relations shows that the average return E(S_t/S_0) on this equity over the interval [0; t] is equivalent to a capitalisation at the instant rate E_R. A second property can be established, relative to the instant return on the security over the interval [0; t]. This return obeys a normal distribution with mean and standard deviation given by:

(E_R − σ_R²/2 ; σ_R/√t)

This result may appear paradoxical, as the mean of the return is not equal to E_R. This is because of the structure of the stochastic process, and it is not incompatible with the intuitive solution, as we have E(S_t) = S_0 · e^{E_R·t}. To establish this property, expressing the stochastic instant return process as δ_t, we can write S_t = S_0 · e^{δ_t·t}, that is, according to the preceding property:

δ_t = (1/t) · ln(S_t/S_0) = E_R − σ_R²/2 + σ_R · (w_t/t)

This establishes the property.

4 Bonds

4.1 CHARACTERISTICS AND VALUATION
To an investor, a bond is a financial asset, issued by a public institution or private company, corresponding to a loan that confers the right to interest payments (known as coupons) and repayment of the loan upon maturity. It is a negotiable security, and its issue price, redemption value, coupon total and life span are generally known and fixed beforehand.

4.1.1 Definitions
A bond is characterised by various elements:

1. The nominal value or NV of a bond is the amount printed on the security, which, along with the nominal rate of the bond, allows the coupon total to be determined.
2. The bond price is shown as P. This may be the price at issue (t = 0) or at any subsequent moment t. The maturity price is of course identical to the redemption value R mentioned above.
3. The coupons C_t constitute the interest paid by the issuer. These are paid at various moments, which are assumed to be both regular and annual (t = 1, 2, . . . , T).
4. The maturity T represents the period of time that separates the moment of issue from the time of reimbursement of the security.

The financial flows associated with a bond are therefore:

• From the purchaser, the payment of its price; this may be either the issue price paid to the issuer or the price of the bond paid to any seller at a time subsequent to the issue.
• From the issuer, the payment of coupons from the time of acquisition onwards and the repayment on maturity.

The issue price, nominal value and repayment value are not necessarily equal. There may be premiums (positive or negative) on issue and/or on repayment. The bonds described above are those that we will be studying in this chapter; they are known as fixed-rate bonds. There are many variations on this simple bond model. It is possible, for example, for no coupons to be paid during the bond's life span, the return thus being only the difference between the issue price and the redemption value.
This is referred to as a zero-coupon bond.¹ This kind of security is equivalent to a fixed-rate investment. There are also bonds more complex than those described above, for example:²

• Variable rate bonds, for which the value of each coupon is determined periodically according to a parameter such as an index.

¹ A debenture may therefore, in a sense, be considered to constitute a superimposition of zero-coupon debentures.
² Read for example Colmant B., Delfosse V. and Esch L., Obligations, Les notions financières essentielles, Larcier, 2002. Also: Fabozzi F. J., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000.


• Transition bonds, which authorise repayment before the maturity date.
• Lottery bonds, in which the (public) issuer repays certain bonds each year in a draw.
• Convertible bonds (convertible into equities), etc.

4.1.2 Return on bonds
The return on a bond can of course be calculated by the nominal rate (or coupon rate) r_n, which is defined as the ratio between the coupon value and the nominal value:

r_n = C / NV

This definition, however, will only make sense if all the different coupons have the same value. It can be adapted by replacing the denominator with the price of the bond at a given moment. The nominal rate is of limited interest, as it does not involve the life span of the bond at any point; using it to compare two bonds is therefore rather pointless. For a fixed period of time (such as one year), it is possible to use a rate of return equivalent to the return on an equity:

(P_t + C_t − P_{t−1}) / P_{t−1}

This concept is, however, very little used in practice.

4.1.2.1 Actuarial rate on issue
The actuarial rate on issue, or more simply the actuarial rate r of a bond, is the rate for which there is equality between the discounted value of the coupons and the repayment value on one hand, and the issue price on the other hand:

P = Σ_{t=1}^{T} C_t(1 + r)^{−t} + R(1 + r)^{−T}

Example
Consider for example a bond with a period of six years and nominal value 100, issued at 98 and repaid at 105 (issue and reimbursement premiums 2 and 5 respectively) and a nominal rate of 10 %. The equation that defines its actuarial rate is therefore:

98 = 10/(1 + r) + 10/(1 + r)² + 10/(1 + r)³ + 10/(1 + r)⁴ + 10/(1 + r)⁵ + (10 + 105)/(1 + r)⁶

This equation (of the sixth degree in the unknown r) can be solved numerically and gives r = 0.111044, that is, approximately 11.1 %. The actuarial rate for a zero-coupon bond is of course the rate for a risk-free investment, and is defined by P = R(1 + r)^{−T}.
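The sixth-degree equation has no convenient closed form, but any one-dimensional root-finder works, since the price is a decreasing function of r. A bisection sketch (function names are ours):

```python
# Sketch: solving for the actuarial rate by bisection, with the figures
# of the example above (issue at 98, coupons of 10 for 6 years, repaid at 105).

def price(r, coupons, redemption):
    T = len(coupons)
    return sum(c / (1 + r) ** t for t, c in enumerate(coupons, start=1)) \
        + redemption / (1 + r) ** T

def actuarial_rate(issue_price, coupons, redemption, lo=0.0, hi=1.0):
    for _ in range(100):                 # price is decreasing in r, so bisect
        mid = (lo + hi) / 2
        if price(mid, coupons, redemption) > issue_price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = actuarial_rate(98.0, [10.0] * 6, 105.0)
print(round(r, 6))   # 0.111044
```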


The rate for a bond issued and reimbursable at par (P = NV = R), with coupons that are equal (C_t = C for all t), is equal to the nominal rate: r = r_n. In fact, for this particular type of bond, we have:

P = Σ_{t=1}^{T} C(1 + r)^{−t} + P(1 + r)^{−T}
  = C · [(1 + r)^{−1} − (1 + r)^{−T−1}] / [1 − (1 + r)^{−1}] + P(1 + r)^{−T}
  = C · [1 − (1 + r)^{−T}] / r + P(1 + r)^{−T}

From this, it can be deduced that r = C/P = r_n.

4.1.2.2 Actuarial return rate at a given moment
The actuarial rate as defined above is calculated when the bond is issued, and is sometimes referred to as the ex ante rate. It is therefore assumed that this rate will remain constant throughout the life of the security (regardless of its maturity date). A major principle of financial mathematics (the principle of equivalence) states that this rate does not depend on the moment at which the various financial flows are 'gathered in'.

Example
If, for the example of the preceding paragraph (a bond with nominal value 100, issued at 98 and repaid at 105, paying an annual coupon of 10 at the end of each of the security's six years of life, with an actuarial rate of 11.1 %), one examines the value acquired, for example, on the maturity date, we have:

• for the investment: 98 · (1 + r)⁶;
• for the generated financial flows: 10 · [(1 + r)⁵ + (1 + r)⁴ + (1 + r)³ + (1 + r)² + (1 + r) + 1] + 105.

The equality of these two quantities is also realised for r = 11.1 %. If we now place ourselves at a given moment t anywhere between 0 and T, and are aware of the change in the market rate between 0 and t, the actuarial rate of return at the moment³ t, which we will call⁴ r(t), is the rate for which there is equality between:

1. The value of the investment acquired at t, calculated at this rate r(t).
2. The sum of:
   — The value, acquired at t, of the coupons fallen due, reinvested at the market rates observed between 0 and t.

³ This is sometimes known as the ex post rate.
⁴ r(0) = r is of course the actuarial rate at issue.


   — The discounted value at t of the financial flows generated subsequent to t, calculated using the market rate at the moment t.

Example
Let us take the same example as above. Suppose that we are at the moment immediately subsequent to payment of the third coupon (t = 3), and that the market rate remained at 11.1 % for the first two years and has now changed to 12 %. The above definition gives us the equation that defines the actuarial rate of return for the specific moment t = 3:

98 · (1 + r(3))³ = (10 · 1.111 · 1.12 + 10 · 1.12 + 10) + 10/1.12 + 10/1.12² + 115/1.12³ = 35.33 + 98.76 = 134.09

This gives r(3) = 11.02 %. It will of course be evident that if the rate of interest had remained constant (and equal to 11.1 %) for the first three years, the above calculation would have led to r(3) = 11.1 %, this being consistent with the principle of equivalence. This example clearly shows the phenomenon of bond risk linked to changes in interest rates. This phenomenon will be studied in greater detail in Section 4.2.1.

4.1.2.3 Accrued interest
When a bond is acquired between two coupon payment dates, the purchaser pays not only the value of the bond for that specific moment but also the portion of the coupon to come, calculated in proportion to the period that has passed since payment of the last coupon. The seller, in fact, has the right to partial interest relating to the period from the last coupon payment to the moment of the deal. This principle is called the accrued interest system, and the price effectively paid is the dirty price, as opposed to the clean price, which represents merely the quoted price of the bond at the time of the deal. Let us consider a bond of maturity T and a non-integer moment t + θ (integer t and 0 ≤ θ < 1). [...]

The set of spot rates {R(s) : s > 0} constitutes the term interest-rate structure at moment 0, and the graph of this function is termed the yield curve. The most natural direction of the yield curve is of course upwards; the investor should gain more if he invests over a longer period.
This, however, is not always the case; in practice we frequently see flat curves (constant R(s)) and increasing curves, but also inverted curves (decreasing R(s)) and humped curves (see Figure 4.4).

Figure 4.4 Interest rate curves

¹³ A detailed presentation of these concepts can be found in Bisière C., La Structure par Terme des Taux d'intérêt, Presses Universitaires de France, 1997.
¹⁴ This justifies the title of this present section, which mentions 'interest rates' and not bonds.


4.3.2 Static interest rate structure
The static models examine the structure of interest rates at a fixed moment, which we will term 0, and deal with a zero-coupon bond that gives rise to a repayment of 1, which is not a restriction. In this and the next paragraph, we will detail the model for the discrete case and then generalise it for the continuous case. It is the continuous aspects that will be used in Section 4.5 for the stochastic dynamic models.

4.3.2.1 Discrete model
The price at 0 for a bond of maturity s is termed¹⁵ P_0(s), and the associated spot rate is represented by R_0(s). We therefore have: P_0(s) = (1 + R_0(s))^{−s}. The spot interest rate R_0(s) in fact combines all the information on interest rates relative to the periods [0; 1], [1; 2], . . . , [s − 1; s]. We will give the symbol r(t), and the name term interest rate or short-term interest rate, to the rate relative to the period [t − 1; t]. We therefore have:

(1 + R_0(s))^s = (1 + r(1)) · (1 + r(2)) · . . . · (1 + r(s))

Reciprocally, it is easy to express the term rates according to the spot rates:

r(1) = R_0(1)
1 + r(s) = (1 + R_0(s))^s / (1 + R_0(s − 1))^{s−1}    (s = 2, 3, . . .)

In the same way, we have:

r(s) = P_0(s − 1)/P_0(s) − 1    (s > 0)

To sum up, we can easily move from any one of the following three structures to another: the price structure {P_0(s) : s = 1, 2, . . .}, the spot-rate structure {R_0(s) : s = 1, 2, . . .} and the term interest structure {r(s) : s = 1, 2, . . .}.

Example
Let us consider the spot-rate structure defined for maturity dates 1 to 6 shown in Table 4.1. This (increasing) structure is shown in Figure 4.5. From it, it is easy to deduce prices and term rates, for example:

P_0(5) = 1.075^{−5} = 0.6966
r(5) = 1.075⁵/1.073⁴ − 1 = 0.0830

This gives, more generally, the data shown in Table 4.2.

¹⁵ Of course, P_0(0) = 1.

Table 4.1 Spot-rate structure

s        1       2       3       4       5       6
R0(s)    6.0 %   6.6 %   7.0 %   7.3 %   7.5 %   7.6 %

Figure 4.5 Spot-rate structure

Table 4.2 Price and rate structures at 0

s    R0(s)    P0(s)     r(s)
0             1.0000
1    0.060    0.9434    0.0600
2    0.066    0.8800    0.0720
3    0.070    0.8163    0.0780
4    0.073    0.7544    0.0821
5    0.075    0.6966    0.0830
6    0.076    0.6444    0.0810
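The passages between the three structures can be sketched in a few lines, reproducing the Table 4.1/4.2 figures (variable names are ours):

```python
# Sketch: deriving the prices P0(s) and the term rates r(s) from the
# spot-rate structure of Table 4.1 (discrete model).

R0 = {1: 0.060, 2: 0.066, 3: 0.070, 4: 0.073, 5: 0.075, 6: 0.076}

P0 = {0: 1.0}
for s, R in R0.items():
    P0[s] = (1 + R) ** (-s)                  # P0(s) = (1 + R0(s))^(-s)

r = {s: P0[s - 1] / P0[s] - 1 for s in R0}   # r(s) = P0(s-1)/P0(s) - 1

print(round(P0[5], 4), round(r[5], 4))   # 0.6966 0.083
```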

4.3.2.2 Continuous model
If the time set is [0; +∞[, we retain the same definitions and notations for the price and spot-rate structures: {P_0(s) : s > 0} and {R_0(s) : s > 0}. This last is an instant rate; after a period s, a total of 1 will become, at this rate, e^{s·R_0(s)}. We will also note, before taking limits, R_0^d(s), the spot rate of the discrete model (even applied to a non-integer period). It is linked to the spot rate of the continuous model by the relation R_0(s) = ln(1 + R_0^d(s)). With regard to the term rate, we provisionally introduce the notation r(t_1, t_2) to represent the interest rate relative to the period [t_1; t_2], and we define the instant term interest rate by:

r(t) = lim_{s→t+} (1/(s − t)) ∫_t^s r(t, u) du

We can readily obtain, as above:

(1 + R_0^d(s + Δs))^{s+Δs} = (1 + R_0^d(s))^s · (1 + r(s, s + Δs))^{Δs}

Thanks to the Taylor formula, this is written:

[1 + Δs · r(s, s + Δs) + O((Δs)²)] · (1 + R_0^d(s))^s = (1 + R_0^d(s + Δs))^{s+Δs}


This relation can be rewritten as:

r(s, s + Δs) · (1 + R_0^d(s))^s + O(Δs) = [(1 + R_0^d(s + Δs))^{s+Δs} − (1 + R_0^d(s))^s] / Δs

After taking the limit, this becomes:

r(s) = [(1 + R_0^d(s))^s]′ / (1 + R_0^d(s))^s = [ln (1 + R_0^d(s))^s]′ = [s · ln(1 + R_0^d(s))]′ = [s · R_0(s)]′

This relation, which expresses the instant term rate according to the spot rate, can easily be inverted by integrating:

R_0(s) = (1/s) ∫_0^s r(u) du

This can also be expressed by saying that the spot rate for the period [0; s] is the average of the instant term rates over the same period. The price is of course linked to the two rates by the relations:

P_0(s) = e^{−s·R_0(s)} = e^{−∫_0^s r(u) du}

Note
For a flat rate structure (that is, R_0(s) independent of s), it is easy to see, by developing the relation [s · R_0(s)]′ = r(s), that R_0(s) = r(s) = r for every s, and that the price structure is given by P_0(s) = e^{−rs}.

4.3.3 Dynamic interest rate structure
The dynamic models examine the structure of the interest rates at any given moment t. They still deal with zero-coupon bonds, issued at 0 and giving rise to a repayment of 1. They allow the distortions of the rate curve to be taken into account; in fact, we will be studying the link that exists between the price and rate structures at the various observation moments.

4.3.3.1 Discrete model
The price at the moment t of the bond issued at 0 and maturing at s is termed¹⁶ P_t(s). The term R_t(s) is given to the spot rate relative to the interval ]t; s]. Finally, the term rate relative to the period ]t − 1; t] is termed r(t).

¹⁶ It is of course supposed that 0 < t < s.


Following reasoning similar in every way to that used for the static models, we readily obtain the relations:

P_t(s) = (1 + R_t(s))^{−(s−t)}
(1 + R_t(s))^{s−t} = (1 + r(t + 1)) · (1 + r(t + 2)) · . . . · (1 + r(s))

These invert readily to:

r(t + 1) = R_t(t + 1)
1 + r(s) = (1 + R_t(s))^{s−t} / (1 + R_t(s − 1))^{s−t−1}    (s = t + 2, t + 3, . . .)

We also have, between the structure of the prices and that of the interest rates:

r(s) = P_t(s − 1)/P_t(s) − 1    (s > t)

The link between the price structures at different observation times is expressed by the following relation:

P_t(s) = [(1 + r(t + 1)) · (1 + r(t + 2)) · . . . · (1 + r(s))]^{−1}
       = (1 + r(t)) · [(1 + r(t)) · (1 + r(t + 1)) · (1 + r(t + 2)) · . . . · (1 + r(s))]^{−1}
       = (1 + R_{t−1}(s))^{−(s−t+1)} / (1 + R_{t−1}(t))^{−1}
       = P_{t−1}(s) / P_{t−1}(t)

This result can easily be generalised: whatever u may be, with u ≤ t ≤ s, we have:

P_t(s) = P_u(s) / P_u(t)

From this relation it is possible to deduce a link, which, however, has a rather ungainly expression, between the spot-rate structures at the various times.

Example
Let us take once again the spot interest-rate structure used in the previous paragraph: 6 %, 6.6 %, 7 %, 7.3 %, 7.5 % and 7.6 % for the respective maturity dates at 1, 2, 3, 4, 5 and 6 years. Let us see what happens to the structure after two years. We find easily:

P_2(5) = P_0(5)/P_0(2) = 0.69656/0.88001 = 0.7915
R_2(5) = P_2(5)^{−1/(5−2)} − 1 = 0.7915^{−1/3} − 1 = 0.0810

Table 4.3 Price and rate structures at 2

s    P2(s)     R2(s)
2    1.0000
3    0.9276    0.0780
4    0.8573    0.0800
5    0.7915    0.0810
6    0.7322    0.0810

and, more generally, as shown in Table 4.3. Note that we have:

r(5) = P_0(4)/P_0(5) − 1 = P_2(4)/P_2(5) − 1 = 0.0830
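The relation P_t(s) = P_u(s)/P_u(t) makes the structure at t = 2 a one-liner from the initial structure; a sketch reproducing the Table 4.3 figures (variable names are ours):

```python
# Sketch: the price and spot-rate structures seen from t = 2, obtained
# from the initial structure through P2(s) = P0(s)/P0(2).

R0 = {1: 0.060, 2: 0.066, 3: 0.070, 4: 0.073, 5: 0.075, 6: 0.076}
P0 = {s: (1 + R) ** (-s) for s, R in R0.items()}

t = 2
P2 = {s: P0[s] / P0[t] for s in range(t, 7)}               # P2(2) = 1 by construction
R2 = {s: P2[s] ** (-1 / (s - t)) - 1 for s in range(t + 1, 7)}

print(round(P2[5], 4), round(R2[5], 4))   # 0.7915 0.081
```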

4.3.3.2 Continuous model
The prices P_t(s) and the spot rates R_t(s) are defined as for the static models, but with an observation at moment t instead of 0. The instant term rates r(t) are defined in the same way. It can easily be seen that the relations linking the two are:

r(s) = [(s − t) · R_t(s)]′    (for all t)
R_t(s) = (1/(s − t)) ∫_t^s r(u) du

Meanwhile, the relations that link rates to prices are given by:

P_t(s) = e^{−(s−t)·R_t(s)} = e^{−∫_t^s r(u) du}

4.3.4 Deterministic model and stochastic model
The relations mentioned above have been established in a deterministic context. Among other things, the instant short rate and the term rate have been assimilated. More generally (stochastic model), the following distinction should be made:

1. The instant term rate, defined by: r(t) = lim_{s→t+} R_t(s).
2. The instant forward rate, defined as follows: if f_t(s_1, s_2) represents the rate of interest, seen from time t, for a bond issued at s_1 and maturing at s_2, the forward rate (at s, seen from t, with t < s) is: f_t(s) = lim_{u→s+} f_t(s, u).

In a general model, it is this forward rate that must be used to find the price and spot-rate structures:

P_t(s) = e^{−∫_t^s f_t(u) du}
R_t(s) = (1/(s − t)) ∫_t^s f_t(u) du


It can easily be seen that these two rates (instant term and forward) are linked by the relation r(t) = f_t(t). It can be demonstrated that in the deterministic case, f_t(s) is independent of t, and the two rates can therefore be identified: f_t(s) = r(s). It is therefore only in this context that we have:

P_t(s) = e^{−∫_t^s r(u) du}
R_t(s) = (1/(s − t)) ∫_t^s r(u) du
P_t(s) = P_u(s)/P_u(t)

4.4 BOND PORTFOLIO MANAGEMENT STRATEGIES

4.4.1 Passive strategy: immunisation
The aim of passive management is to neutralise the portfolio risk caused by fluctuations in interest rates.

4.4.1.1 Duration and convexity of a portfolio
Let us consider a bond portfolio consisting at moment 0 of N securities (j = 1, . . . , N), each characterised by:

• a maturity (residual life) T_j;
• coupons yet to come C_{j,t} (t = 1, . . . , T_j);
• a repayment value R_j;
• an actuarial rate on issue r_j;
• a price P_j.

The highest of the maturity values T_j will be termed T, and F_{j,t} the financial flow generated by the security j at the moment t:

F_{j,t} = C_{j,t}            if t < T_j
        = C_{j,T_j} + R_j    if t = T_j
        = 0                  if t > T_j

The duration of the jth security is given by:

D_j = [Σ_{t=1}^{T_j} t · C_{j,t}(1 + r_j)^{−t} + T_j · R_j(1 + r_j)^{−T_j}] / [Σ_{t=1}^{T_j} C_{j,t}(1 + r_j)^{−t} + R_j(1 + r_j)^{−T_j}]
    = [Σ_{t=1}^{T_j} t · F_{j,t}(1 + r_j)^{−t}] / P_j


Finally, let us suppose that the jth security is present within the portfolio in the number n_j. The discounted financial flow generated by the portfolio at moment t totals:

Σ_{j=1}^{N} n_j F_{j,t}(1 + r_j)^{−t}

Its price totals Σ_{j=1}^{N} n_j P_j. The duration of the portfolio can therefore be written as:

D_P = Σ_{t=1}^{T} t · [Σ_{j=1}^{N} n_j F_{j,t}(1 + r_j)^{−t}] / [Σ_{k=1}^{N} n_k P_k]
    = Σ_{j=1}^{N} [n_j / Σ_{k=1}^{N} n_k P_k] · Σ_{t=1}^{T_j} t · F_{j,t}(1 + r_j)^{−t}
    = Σ_{j=1}^{N} [n_j P_j / Σ_{k=1}^{N} n_k P_k] · [Σ_{t=1}^{T_j} t · F_{j,t}(1 + r_j)^{−t}] / P_j
    = Σ_{j=1}^{N} X_j D_j

where X_j = n_j P_j / Σ_{k=1}^{N} n_k P_k represents the proportion of the jth security within the portfolio, expressed in terms of capitalisation. The same reasoning gives the convexity of the portfolio:

C_P = Σ_{j=1}^{N} X_j C_j
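The formula D_P = Σ X_j D_j can be sketched as follows; the two bonds below (the first echoes the Section 4.1.2 example, the second is invented) are there only to exercise the computation:

```python
# Sketch: portfolio duration as the capitalisation-weighted average of
# the individual durations, D_P = sum of X_j * D_j.

def bond_flows(coupon, redemption, T):
    """Financial flows F_t of a fixed-rate bond: coupon each year, plus redemption at T."""
    return {t: coupon + (redemption if t == T else 0.0) for t in range(1, T + 1)}

def price(flows, r):
    return sum(F * (1 + r) ** -t for t, F in flows.items())

def duration(flows, r):
    return sum(t * F * (1 + r) ** -t for t, F in flows.items()) / price(flows, r)

# (flows, actuarial rate r_j, number held n_j) -- hypothetical holdings
portfolio = [(bond_flows(10.0, 105.0, 6), 0.111044, 3),
             (bond_flows(5.0, 100.0, 4), 0.060, 5)]

caps = [n * price(f, r) for f, r, n in portfolio]   # capitalisations n_j * P_j
X = [c / sum(caps) for c in caps]                   # proportions X_j
D = [duration(f, r) for f, r, n in portfolio]
D_P = sum(x * d for x, d in zip(X, D))
print(min(D) <= D_P <= max(D))   # True: a weighted average of the durations
```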

4.4.1.2 Immunising a portfolio
A portfolio is said to be immunised at horizon H if its value at that date is at least the value that it would have had if interest rates had remained constant during the period [0; H]. By applying to each bond in the portfolio the result arrived at in Section 4.2.2, we obtain the same result: a bond portfolio is immunised at a horizon that corresponds to its duration.


Of course, whenever the interest rate changes, the residual duration varies suddenly. A careful bond portfolio manager wishing to immunise his portfolio for a horizon H that he has fixed must therefore:

• Put together a portfolio with duration H.
• After each (significant) interest rate change, alter the composition of the portfolio by making sales and purchases (that is, alter the proportions Xj) so that the residual duration can be 'pursued'.

Of course these alterations to the portfolio composition will incur transaction charges, which should be taken into consideration and balanced against the benefits supplied by the immunising strategy.

Note
It was stated in Section 4.2.3 that of two bonds that present the same return (actuarial rate) and duration, the one with the higher convexity will be of greater interest. This result remains valid for a portfolio, and the manager must therefore take it into consideration whenever revising his portfolio.

4.4.2 Active strategy

The aim of active management is to obtain a return higher than that produced by immunisation, that is, higher than the actuarial return rate on issue. In the case of increasing rates (the commonest case), when the rate curve remains unchanged over time, the technique is to purchase securities with a longer maturity than the investment horizon and to sell them before their maturity date.17

Example
Let us take once again the rate structure shown in the previous section (Table 4.4). Let us suppose that the investor fixes a two-year horizon. If he simply purchases a security with maturity in two years, he will obtain an annual return of 6.6 %. Indeed, the return over the two years is given by

    (1 − 0.8800) / 0.8800 = 0.1364    and    √1.1364 = 1.066

Table 4.4  Price and rate structures

    s    R0(s)    P0(s)     r(s)
    0      —      1.0000      —
    1    0.060    0.9434    0.06000
    2    0.066    0.8800    0.07203
    3    0.070    0.8163    0.07805
    4    0.073    0.7544    0.08205
    5    0.075    0.6966    0.08304
    6    0.076    0.6444    0.08101

17 If the rate curve is ﬂat and remains ﬂat, the strategy presented will produce the same return as the purchase of a security with a maturity equivalent to the investment horizon.


If he purchases a security with maturity in five years (at a price of 0.6966) and sells it on after two years (at the three-year security price if the rate curve remains unchanged, that is, 0.8163), he will realise a total return of

    (0.8163 − 0.6966) / 0.6966 = 0.1719

that is, a growth factor of 1.1719. This gives √1.1719 = 1.0825, an annual return of 8.25 %, which is of considerably greater interest than the return (6.6 %) obtained with the two-year security. Note that we have here an interpretation of the term rates: the total return effectively obtained over the period [3; 5] is given by (1 + r(4)) · (1 + r(5)) = 1.0821 · 1.0830 = 1.1719.

The return obtained using this technique assumes that the rate curve remains unchanged over time. If, however, the curve (and more specifically the spot rate used to calculate the resale price) fluctuates, the investor will be exposed to the interest-rate fluctuation risk. This fluctuation will be favourable (unfavourable) to him if the rate in question falls (rises). The investor will therefore have to choose between a safe return and a higher but potentially riskier return.

Example
With the same information, if after the purchase of the security with maturity in five years the spot rate for the three-year security shifts from 7 % to 8 %, the price of that security will fall from 0.8163 to 0.7938 and the return over the two years will be

    (0.7938 − 0.6966) / 0.6966 = 0.1396

We therefore have √1.1396 = 1.0675, which corresponds to an annual return of 6.75 %.
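The returns in this section follow directly from the zero-coupon prices in Table 4.4; a short sketch:

```python
# Active strategy of the example: buy the 5-year zero-coupon at
# P0(5) = 0.6966 and resell it after two years at the 3-year price
# (0.8163 if the curve is unchanged, 0.7938 if the 3-year spot moves
# to 8 %), versus simply holding the 2-year zero-coupon.

def annual_return(buy, sell, years):
    """Annualised return of buying at `buy` and selling at `sell`."""
    return (sell / buy) ** (1 / years) - 1

r_hold = annual_return(0.8800, 1.0, 2)       # hold the 2-year zero: 6.60 %
r_static = annual_return(0.6966, 0.8163, 2)  # curve unchanged: 8.25 %
r_shift = annual_return(0.6966, 0.7938, 2)   # 3-year spot at 8 %: 6.75 %
```

Even in the unfavourable rate scenario the strategy still beats the safe two-year return here, but that is a feature of these particular figures, not a general property.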

4.5 STOCHASTIC BOND DYNAMIC MODELS

The models presented here are actually generalisations of the deterministic interest-rate structures. The aim is to produce relations that govern changes in the price Pt(s) and the spot rate Rt(s). There are two main categories of these models: distortion models and arbitrage models.

The distortion models examine the changes in the price Pt(s) when the interest-rate structure is subject to distortion. A simple model is that of Ho and Lee,18 in which the distortion of the rate curve shows in two possible movements in each period; it is therefore a binomial discrete-type model. A more developed model is the Heath, Jarrow and Morton model,19 which has a discrete and a continuous version and in which the distortions to the rate curve are more complex.

The arbitrage models involve the compilation, and where possible the resolution, of a partial differential equation for the price Pt(s, v1, v2, ...), considered as a function of t, v1, v2, ... (s fixed), using:

18 Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of Finance, Vol. 41, No. 5, 1986, pp. 1011–29.
19 Heath D., Jarrow R. and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New Methodology, Cornell University, 1987. Heath D., Jarrow R. and Morton A., Bond pricing and the term structure of interest rates: discrete time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40.


• the absence of arbitrage opportunity;
• hypotheses relating to the stochastic processes that govern the evolution of the state variables v1, v2, etc.

The commonest of the models with just one state variable are the Merton model,20 the Vasicek model21 and the Cox, Ingersoll and Ross model;22 all these use the instant term rate r(t) as the state variable. The models with two state variables include:

• The Brennan and Schwartz model,23 which uses the instant term rate r and the long rate l as variables.
• The Nelson and Schaefer model24 and the Schaefer and Schwartz model,25 for which the state variables are the long rate l and the spread s = l − r.
• The Richard model,26 which uses the instant term rate and the rate of inflation.
• The Ramaswamy and Sundaresan model,27 which takes the instant market price of risk linked to the risk of default alongside the instant term rate.

In this section we will deal only with the simplest of the arbitrage models: after a general introduction to the principle of these models (Section 4.5.1), we will examine in succession the Vasicek model (Section 4.5.2) and the Cox, Ingersoll and Ross model28 (Section 4.5.3). Finally, in Section 4.5.4, we will deal with the concept of 'stochastic duration'.

4.5.1 Arbitrage models with one state variable

4.5.1.1 General principle

It is once again stated (see Section 4.3) that the stochastic processes of interest to us here are:

• The price Pt(s) in t of a zero-coupon bond (unit repayment value) maturing at the moment s (with t < s).
• The spot rate Rt(s), linked to the price by the relation

    Pt(s) = e^(−(s − t)·Rt(s))

20 Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4, No. 1, 1973, pp. 141–83.
21 Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics, Vol. 5, No. 2, 1977, pp. 177–88.
22 Cox J., Ingersoll J. and Ross J., A theory of the term structure of interest rates, Econometrica, Vol. 53, No. 2, 1985, pp. 385–406.
23 Brennan M. and Schwartz E., A continuous time approach to the pricing of bonds, Journal of Banking and Finance, Vol. 3, No. 2, 1979, pp. 133–55.
24 Nelson J. and Schaefer S., The dynamics of the term structure and alternative portfolio immunization strategies, in Bierwag D., Kayfman G. and Toevs A., Innovations in Bond Portfolio Management: Duration Analysis and Immunization, JAI Press, 1983.
25 Schaefer S. and Schwartz E., A two-factor model of the term structure: an approximate analytical solution, Journal of Financial and Quantitative Analysis, Vol. 19, No. 4, 1984, pp. 413–24.
26 Richard S., An arbitrage model of the term structure of interest rates, Journal of Financial Economics, Vol. 6, No. 1, 1978, pp. 33–57.
27 Ramaswamy K. and Sundaresan M., The valuation of floating-rate instruments: theory and evidence, Journal of Financial Economics, Vol. 17, No. 2, 1986, pp. 251–72.
28 The attached CD-ROM contains a series of Excel files that show simulations of these stochastic processes and their rate curves for the various models, combined together in the 'Ch4' file.


• The instant term rate, which we will hereafter refer to as rt29 (or r if there is no risk of confusion), which is the instant rate at moment t, written as

    rt = lim_{s→t+} Rt(s) = lim_{s→t+} [1/(s − t)] · ∫_t^s ft(u) du

It is this instant term rate that will be the state variable. The price and spot rate will be written as Pt(s, r) and Rt(s, r) and will be considered as functions of the variables t and r alone, the maturity date s being fixed. In addition, it is assumed that these expressions are random via the intermediary of rt only.

It is assumed here that the changes in the state variable rt are governed by the general stochastic differential equation30

    drt = a(t, rt) · dt + b(t, rt) · dwt

where the coefficients a and b respectively represent the average instant return of the instant term rate and the volatility of that rate, and wt is the standard Brownian motion.

Applying the Itô formula to the function Pt(s, rt) leads to the following, with simplified notations (primes denoting partial derivatives with respect to t and r):

    dPt(s, rt) = (P't + P'r · a + ½ P''rr · b²) · dt + P'r · b · dwt
               = Pt(s, rt) · µt(s, rt) · dt − Pt(s, rt) · σt(s, rt) · dwt

Here, we have:

    µt = (P't + P'r · a + ½ P''rr · b²) / P
    σt = −P'r · b / P

(Note that σt > 0 as P'r < 0.) The expression µt(s, rt) is generally termed the average instant return of the bond.

Let us now consider two fixed maturity dates s1 and s2 (> t) and apply an arbitrage reasoning by putting together, at the moment t, a portfolio consisting of:

• the issue of a bond with maturity date s1;
• the purchase of X bonds with maturity date s2.

The number X is chosen so that the portfolio does not contain any random component; the term involving dwt therefore has to disappear. The value of this portfolio at moment t is given by Vt = −Pt(s1) + X·Pt(s2), and the hypothesis of absence of arbitrage opportunity allows us to express that the average return on this portfolio over the interval [t; t + dt] is given by the instant term rate rt:

    dVt / Vt = rt · dt + 0 · dwt

29 Instead of r(t) as in Section 4.3, for ease of notation.
30 See Appendix 2.


By differentiating the value of the portfolio, we have:

    dVt = −Pt(s1) · (µt(s1) dt − σt(s1) dwt) + X · Pt(s2) · (µt(s2) dt − σt(s2) dwt)
        = [−Pt(s1)µt(s1) + X·Pt(s2)µt(s2)] · dt + [Pt(s1)σt(s1) − X·Pt(s2)σt(s2)] · dwt

The arbitrage logic will therefore lead us to:

    [−Pt(s1)µt(s1) + X·Pt(s2)µt(s2)] / [−Pt(s1) + X·Pt(s2)] = rt
    [Pt(s1)σt(s1) − X·Pt(s2)σt(s2)] / [−Pt(s1) + X·Pt(s2)] = 0

In other words:

    X·Pt(s2) · (µt(s2) − rt) = Pt(s1) · (µt(s1) − rt)
    X·Pt(s2) · σt(s2) = Pt(s1) · σt(s1)

We can eliminate X, for example by dividing the two equations member by member, which gives:

    (µt(s1) − rt) / σt(s1) = (µt(s2) − rt) / σt(s2)

This shows that the expression

    λt(rt) = (µt(s) − rt) / σt(s)

is independent of s; this expression is known as the market price of risk. By replacing µt and σt with their values in the preceding relation, we arrive at:

    P't + (a + λb)·P'r + (b²/2)·P''rr − r·P = 0

What we are looking at here is a second-order partial differential equation which, together with the initial condition Ps(s, rt) = 1, defines the price process. This equation must be resolved for each specification of a(t, rt), b(t, rt) and λt(rt).

4.5.1.2 The Merton model31

Because of its historical interest,32 we first show the simplest model, the Merton model. This model assumes that the instant term rate follows a random walk:

    drt = α · dt + σ · dwt

with α and σ constant and the market price of risk zero (λ = 0). The partial differential equation for the prices takes the form:

    P't + α·P'r + (σ²/2)·P''rr − r·P = 0

31 Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4, No. 1, 1973, pp. 141–83. 32 This is in fact the ﬁrst model based on representation of changes in the spot rate using a stochastic differential equation.


It is easy to verify that the solution of this equation (with the initial condition) is given by

    Pt(s, rt) = exp[ −(s − t)·rt − (α/2)(s − t)² + (σ²/6)(s − t)³ ]

The average instant return rate is given by

    µt(s, rt) = [P't + α·P'r + (σ²/2)·P''rr] / P = rt · P / P = rt

which shows that in this case the average return is independent of the maturity date. The spot rate totals:

    Rt(s, rt) = −[1/(s − t)] · ln Pt(s, rt)
              = rt + (α/2)(s − t) − (σ²/6)(s − t)²
This expression shows that the spot rate is close to the instant term rate in the short term, which is logical, but also (because of the third term) that it will invariably end up negative for distant maturity dates, which is much less logical.

Note
If one generalises the Merton model so that the market price of risk λ is a strictly positive constant, we arrive at an average return µt that grows with the maturity date, but the drawback affecting the spot rate Rt remains.

The Merton model, which is unrealistic, has now been replaced by models that are closer to reality; these models are covered in the next two sections.

4.5.2 The Vasicek model33

In this model, the state variable rt develops according to an Ornstein–Uhlenbeck process

    drt = δ(θ − rt) · dt + σ · dwt

in which the parameters δ, θ and σ are strictly positive constants and the rate risk unit premium is also a strictly positive constant, λt(rt) = λ > 0. The essential property of the Ornstein–Uhlenbeck process is that the variable rt is 'recalled' back towards θ if it moves too far away, δ representing the 'force of recall'.

Example
Figure 4.6 shows a simulated trajectory (evolution over time) of such a process over 1000 very short time periods, with the values δ = 100, θ = 0.1 and σ = 0.8 and a start value r0 of 10 %.

33 Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics, Vol. 5, No. 2, 1977, pp. 177–88.


Figure 4.6 Ornstein–Uhlenbeck process
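A trajectory like the one in Figure 4.6 can be produced with a simple Euler discretisation of the process; a minimal sketch, in which the step length dt = 0.001 is an assumption (the book only says 'very short periods'):

```python
# Euler simulation of the Ornstein-Uhlenbeck process
# dr = delta*(theta - r)*dt + sigma*dw, with the Figure 4.6 parameters
# (delta = 100, theta = 0.1, sigma = 0.8, r0 = 10 %).
# The step length dt = 0.001 is an assumption, not from the book.

import random

def simulate_ou(delta, theta, sigma, r0, n_steps, dt, seed=0):
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)            # Brownian increment
        r += delta * (theta - r) * dt + sigma * dw
        path.append(r)
    return path

path = simulate_ou(100, 0.1, 0.8, 0.10, 1000, 0.001)
```

The recall term δ(θ − r)·dt pulls every excursion back towards θ = 0.1, which is why the simulated rate oscillates around that level rather than drifting away.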

The partial differential equation for the price reads:

    P't + (δ(θ − r) + λσ)·P'r + (σ²/2)·P''rr − r·P = 0

The solution of this equation and its initial condition is given by

    Pt(s, rt) = exp[ −k(s − t) + ((k − rt)/δ)·(1 − e^(−δ(s−t))) − (σ²/(4δ³))·(1 − e^(−δ(s−t)))² ]

where we have:

    k = θ + λσ/δ − σ²/(2δ²)

The average instant return rate is given by:

    µt(s, rt) = [P't + δ(θ − rt)·P'r + (σ²/2)·P''rr] / P
              = [rt · P − λσ · P'r] / P
              = rt + (λσ/δ)·(1 − e^(−δ(s−t)))

This average return increases with the maturity date and presents a horizontal asymptote in the long term (Figure 4.7). The spot rate is given by:

    Rt(s, rt) = −[1/(s − t)] · ln Pt(s, rt)
              = k − ((k − rt)/(δ(s − t)))·(1 − e^(−δ(s−t))) + (σ²/(4δ³(s − t)))·(1 − e^(−δ(s−t)))²

Figure 4.7 The Vasicek model: average instant return

On the one hand, this expression shows that the spot rate stabilises for distant maturity dates, regardless of the initial value of the spot rate:

    lim_{(s−t)→+∞} Rt(s, rt) = k

On the other hand, depending on the current value of the spot rate in relation to the parameters, we can use this model to represent various movements of the yield curve. Depending on whether rt belongs to

    [0; k − σ²/(4δ²)],    [k − σ²/(4δ²); k + σ²/(2δ²)],    [k + σ²/(2δ²); +∞[

we will obtain a rate curve that is increasing, humped or decreasing.

Example
Figure 4.8 shows spot-rate curves produced using the Vasicek model for the following parameter values: δ = 0.2, θ = 0.08, σ = 0.05 and λ = 0.02. The three curves correspond, from bottom to top, to r0 = 2 %, r0 = 6 % and r0 = 10 %.

The Vasicek model, however, has two major drawbacks. On the one hand, the Ornstein–Uhlenbeck process drt = δ(θ − rt) · dt + σ · dwt on which it is constructed sometimes allows, because of the second term, the instant term rate to assume negative values. On the other hand, the spot-rate function that it generates, Rt(s), may in some cases also assume negative values.

Figure 4.8 The Vasicek model: yield curves
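The closed-form spot rate can be evaluated directly; a sketch with the Figure 4.8 parameters (the sampled maturities 1, 10 and 50 are arbitrary choices for illustration):

```python
# Vasicek spot-rate curve R_t(s) from the closed-form expression above,
# with the Figure 4.8 parameters (delta = 0.2, theta = 0.08,
# sigma = 0.05, lambda = 0.02).

import math

def vasicek_spot(r0, tau, delta, theta, sigma, lam):
    """Spot rate for residual maturity tau = s - t."""
    k = theta + lam * sigma / delta - sigma ** 2 / (2 * delta ** 2)
    e = 1 - math.exp(-delta * tau)
    return (k - (k - r0) * e / (delta * tau)
            + sigma ** 2 * e ** 2 / (4 * delta ** 3 * tau))

k = 0.08 + 0.02 * 0.05 / 0.2 - 0.05 ** 2 / (2 * 0.2 ** 2)   # long-term level
curve = [vasicek_spot(0.02, tau, 0.2, 0.08, 0.05, 0.02) for tau in (1, 10, 50)]
```

With r0 = 2 % the sampled curve is increasing, as r0 lies below k − σ²/(4δ²); whatever the start value, the curve tends to k (about 5.4 % here) for distant maturities.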


4.5.3 The Cox, Ingersoll and Ross model34

This model is part of a group also known as the equilibrium models, as they are based on a macroeconomic type of reasoning, founded in turn on the hypothesis that the consumer shows behaviour consistently aimed at maximising expected utility. These considerations, which we will not detail, lead (as do the other arbitrage models) to a specific definition of the stochastic process that governs the evolution of the instant term rate rt, as well as of the market price of risk λt(rt).

If within the Ornstein–Uhlenbeck process the second term is modified to produce

    drt = δ(θ − rt) · dt + σ·rt^α · dwt     (α > 0)

we avoid the drawback mentioned earlier: the instant term rate can no longer become negative. In fact, as soon as it reaches zero, only the first term subsists and the variation in rates must therefore necessarily be upwards; the horizontal axis then operates as a 'repulsing barrier'. The macroeconomic reasoning on which the Cox, Ingersoll and Ross model is based leads to α = 1/2, and the stochastic process is known as the square-root process:

    drt = δ(θ − rt) · dt + σ·√rt · dwt

The same reasoning leads to a rate risk unit premium given by λt(rt) = (γ/σ)·√rt, where γ is a strictly positive constant; the market price of risk therefore increases together with the instant term rate in this case.

Example
Figure 4.9 represents a square-root process with the parameters δ = 100, θ = 0.1 and σ = 0.8.

The partial differential equation for the price is given by

    P't + (δ(θ − r) + γr)·P'r + (σ²/2)·r·P''rr − r·P = 0

Figure 4.9 Square root process

34 Cox J., Ingersoll J. and Ross J., A theory of the term structure of interest rates, Econometrica, Vol. 53, No. 2, 1985, pp. 385–406.


The solution of this equation and its initial condition is given by

    Pt(s, rt) = xt(s) · e^(−yt(s)·rt)

where we have:

    xt(s) = [ 2k·e^((δ−γ+k)(s−t)/2) / zt(s) ]^(2δθ/σ²)
    yt(s) = 2·(e^(k(s−t)) − 1) / zt(s)
    zt(s) = 2k + (δ − γ + k)·(e^(k(s−t)) − 1)
    k = √((δ − γ)² + 2σ²)

The average instant return rate is given by:

    µt(s, rt) = [P't + δ(θ − rt)·P'r + (σ²/2)·rt·P''rr] / P
              = [rt · P − γ·rt · P'r] / P
              = rt · (1 + γ·yt(s))

In this case, the average rate of return is proportional to the instant term rate. Finally, the spot rate is given by:

    Rt(s, rt) = −[1/(s − t)] · ln Pt(s, rt)
              = −[1/(s − t)] · (ln xt(s) − rt·yt(s))
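The closed-form price and spot rate translate directly into code; a sketch with the Figure 4.10 parameters:

```python
# CIR zero-coupon price and spot rate from the closed form above, with
# the Figure 4.10 parameters (delta = 0.2, theta = 0.08, sigma = 0.05,
# gamma = 0.02).

import math

def cir_price(r0, tau, delta, theta, sigma, gamma):
    k = math.sqrt((delta - gamma) ** 2 + 2 * sigma ** 2)
    z = 2 * k + (delta - gamma + k) * (math.exp(k * tau) - 1)
    x = ((2 * k * math.exp((delta - gamma + k) * tau / 2) / z)
         ** (2 * delta * theta / sigma ** 2))
    y = 2 * (math.exp(k * tau) - 1) / z
    return x * math.exp(-y * r0)

def cir_spot(r0, tau, delta, theta, sigma, gamma):
    return -math.log(cir_price(r0, tau, delta, theta, sigma, gamma)) / tau

# long-maturity limit 2*delta*theta / (delta - gamma + k)
k = math.sqrt((0.2 - 0.02) ** 2 + 2 * 0.05 ** 2)
limit = 2 * 0.2 * 0.08 / (0.2 - 0.02 + k)
```

For these parameters the spot rate stays positive for every maturity and tends to the limit (about 8.6 %) as the maturity grows, in line with the properties stated below.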

Example
Figure 4.10 shows the spot-rate curves produced using the Cox, Ingersoll and Ross model for the following parameter values: δ = 0.2, θ = 0.08, σ = 0.05 and γ = 0.02. The three curves correspond, from bottom to top, to r0 = 2 %, r0 = 6 % and r0 = 10 %.

Figure 4.10 The Cox, Ingersoll and Ross model: rate curves


Finally, we should point out that, in contrast to the Vasicek model, the Cox, Ingersoll and Ross model can never produce a negative spot rate. In addition, as with the Vasicek model, the spot rate stabilises for distant maturity dates regardless of the initial value of the spot rate:

    lim_{(s−t)→+∞} Rt(s, rt) = 2δθ / (δ − γ + k)

4.5.4 Stochastic duration

Finally, to end this section dedicated to random models, we turn to a generalisation of the concept of duration. Duration and convexity of rate products are standard techniques used to assess the sensitivity of the price of an asset to an alteration in its rate; duration allows the variation in value to be estimated even for significant rate variations. These concepts are used not only in bond portfolio management but also in asset and liability management, in the context of immunisation of interest margins. Part of the balance-sheet margin is produced by a spread between the interest paid on liabilities (long-term deposits) and the interest received on assets (the bank's own fixed-income portfolio). This margin is immunised against variations in rate if convexity and duration are identical on both sides of the balance sheet.

This identity of duration and convexity, also known as mutual support, does not necessarily mean that cash flows are identical in assets and liabilities. In that case, a distortion of the rate curve could lead to non-identical alterations in asset and liability values. The transition from a deterministic rate curve to a stochastic rate model provides the solution to this problem: the random evolution of rates allows a stochastic duration to be calculated. There are several stochastic rate models (see Sections 4.5.1–4.5.3), but the type most frequently used in the financial literature is the classical Vasicek model.

4.5.4.1 Random evolution of rates

The classical Vasicek model is based on changes in the instant term rate governed by an Ornstein–Uhlenbeck process: drt = δ(θ − rt) · dt + σ · dwt. The forward long rate r∞fw is of course a function of the parameters δ, θ and σ of the model. Variations in a bond's price depend on the values taken by the random variable rt and on alterations to the model's parameters. The natural way of approaching stochastic duration is to adjust the parameters econometrically on observed rate curves.

4.5.4.2 Principle of mutual support

The total variation in value at the initial moment t is obtained by a first-order Taylor development:

    dVt(s, rt, r∞fw, σ) = V'rt · drt + V'r∞fw · dr∞fw + V'σ · dσ

The principle of mutual support between assets and liabilities requires two restrictions to be respected. First, equality of values between assets and liabilities:

    VA,t(s, rt, r∞fw, σ) = VL,t(s, rt, r∞fw, σ)


and second, equality of total variations: whatever the increases drt, dr∞fw and dσ may be, we have

    dVA,t(s, rt, r∞fw, σ) = dVL,t(s, rt, r∞fw, σ)

This second condition therefore requires that:

    V'A,rt(s, rt, r∞fw, σ) = V'L,rt(s, rt, r∞fw, σ)
    V'A,r∞fw(s, rt, r∞fw, σ) = V'L,r∞fw(s, rt, r∞fw, σ)
    V'A,σ(s, rt, r∞fw, σ) = V'L,σ(s, rt, r∞fw, σ)

4.5.4.3 Extension of the concept of duration

Generally speaking, it is possible to define a duration D that is a function of the variations in the long and short rates:

    Dt(s, rt, r∞fw) = [1/(2V)] · (V'rt + V'r∞fw)

This expression allows us to find the standard duration when the rate curve is deterministic and tends towards a constant curve, with σ = 0 and θ = rt, the initial instant term rate for the period t:

    Dt(s, rt, r∞fw) = (1/V) · V'rt

More generally, the duration is sensitive to the spread S between the short rate and the long rate. The sensitivity to spread allows the variation in value to be calculated for a variation in the spread:

    St(s, rt, r∞fw) = [1/(2V)] · (−V'rt + V'r∞fw)

In this case, if σ is stable and considered to be a constant, mutual support will correspond to the equality of stochastic durations and of spread sensitivities for assets and liabilities. The equality dVA,t(s, rt, r∞fw) = dVL,t(s, rt, r∞fw), valid whatever the increases drt and dr∞fw, is equivalent to:

    DA = DL
    SA = SL
5 Options

5.1 DEFINITIONS

5.1.1 Characteristics

An option1 is a contract that confers on its purchaser, in return for a premium, the right to purchase or sell an asset (the underlying asset) on a future date at a price determined in advance (the exercise price of the option). Options to purchase and options to sell are known respectively as call and put options. The range of assets to which option contracts can be applied is very wide: ordinary equities, bonds, exchange rates, commodities and even some derivative products such as FRAs, futures, swaps or options.

An option always represents a right for the holder and an obligation to buy or sell for the issuer. This option right may be exercised when the contract expires (a European option) or on any date up to and including the expiry date (an American option). The holder of a call option will therefore exercise his option right if the price of the underlying equity exceeds the exercise price (strike) of the option; conversely, a put option will be exercised in the opposite case.

The assets studied in the two preceding chapters clearly show a degree of random behaviour (mean-variance theory for equities, interest-rate models for bonds). They do, however, also allow deterministic approaches (Gordon–Shapiro formula, duration and convexity). With options, the random aspect is much more intrinsic, as everything depends on a decision linked to a future event. This type of contract can be a source of profit (with risks linked to speculation) and a means of hedging.

In this context, we will limit our discussion to European call options. Purchasing this type of option may lead to an attractive return: when the price of the underlying equity on maturity is lower than the exercise price, the option will not be exercised and the loss will be limited to the price of the option (the premium); when the price of the underlying equity on maturity is higher than the exercise price, the underlying equity is received for a price lower than its value.

The sale (issue) of such an option, on the other hand, is a much more speculative operation. The profit will be limited to the premium if the price of the underlying equity remains lower than the exercise price, while considerable losses may arise if the price of the underlying equity rises well above it. This operation should therefore only be envisaged if the issuer has absolute confidence in a fall (or at worst a limited rise) in the price of the underlying equity.

Example
Let us consider a call option on an equity with a current price of 100, a premium of 3 and an exercise price of 105. We will calculate the profit made (or the loss suffered)

1 Colmant B. and Kleynen G., Gestion du risque de taux d'intérêt et instruments financiers dérivés, Kluwer, 1995. Hull J. C., Options, Futures and Other Derivatives, Prentice Hall, 1997. Hicks A., Foreign Exchange Options, Woodhead, 1993.

Table 5.1  Profit on option according to price of underlying equity

    Price of underlying equity    Purchaser's gain    Issuer's gain
    90                                  −3                   3
    95                                  −3                   3
    100                                 −3                   3
    105                                 −3                   3
    106                                 −2                   2
    107                                 −1                   1
    108                                  0                   0
    109                                  1                  −1
    110                                  2                  −2
    115                                  7                  −7
    120                                 12                 −12

by the purchaser and by the issuer of the contract according to the price reached by the underlying equity on maturity. See Table 5.1.

Of course, issuers who notice the price of the underlying equity rising during the contractual period can partly protect themselves by purchasing the same option and thus closing their position. Nevertheless, because of the higher underlying equity price, the premium for the option purchased may be considerably higher than that of the option that was issued.

The price (premium) of an option depends on several factors:

• the price of the underlying equity St at the moment t (known as the spot);
• the exercise price of the option K (known as the strike);
• the duration T − t remaining until the option matures;2
• the volatility σR of the return on the underlying equity;
• the risk-free rate RF.3
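The purchaser's and issuer's gains of Table 5.1 follow directly from the call payoff; a minimal sketch:

```python
# The two gain columns of Table 5.1, computed from the call payoff
# (premium 3, strike 105): the purchaser receives max(0, S_T - K) and
# paid the premium; the issuer's gain is the opposite.

def call_profit(S_T, strike=105.0, premium=3.0):
    """Purchaser's profit at maturity; the issuer's gain is its opposite."""
    return max(0.0, S_T - strike) - premium

prices = [90, 95, 100, 105, 106, 107, 108, 109, 110, 115, 120]
purchaser = [call_profit(S) for S in prices]
issuer = [-g for g in purchaser]
```

The purchaser breaks even at the strike plus the premium (108 here), which is where both columns of the table cross zero.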

The premium is thus a function f of these parameters (termed C or P depending on whether a call or a put is involved); the various ways of specifying f give rise to what are termed valuation models. These are dealt with in Section 5.3.

5.1.2 Use

The example shown in the preceding paragraph corresponds to the situation in which the purchaser can hope for an attractive gain. The profit realised is shown in graphic form in Figure 5.1.

2 This residual duration T − t is frequently referred to simply as τ.
3 This description corresponds, for example, to an option on an equity (with which we will mostly be dealing in this chapter). For an exchange option, the rate of interest will be divided in two, into domestic currency and foreign currency. In addition, this rate, which will be used in discounting, may be considered either discretely (discounting factor (1 + RF)^(−t)) or continuously (the notation r will then be used: e^(−rt)).

Figure 5.1 Acquisition of a call option

Alongside this speculative aspect, the issue of a call option can become attractive if it is held along with the underlying equity. In fact, if the underlying equity price falls (or rises little), the loss suffered on that equity will be partly offset by receipt of the premium, whereas if the price rises greatly, the profit that would have been realised will be limited to the price of the option plus the differential between the exercise price and the price of the underlying equity at the start of the contract.

Example 1
Following the example shown above, we calculate the profit realised (or the loss suffered) when the underlying equity alone (purchased at 100) is held and when it is covered by the issued call option (Table 5.2).

Example 2
Let us now look at a more realistic example. A European company X often has invoices expressed in US dollars payable on delivery. The prices are of course fixed at the moment of purchase (long before the delivery). If the rate for the dollar rises between the moment of purchase and the moment of delivery, the company X will suffer a loss if it purchases its dollars at the moment of payment.

Table 5.2  Profit/loss on equity covered by call option

    Price of underlying equity    Equity alone    Equity + issued call
    90                                −10                 −7
    95                                 −5                 −2
    100                                 0                  3
    105                                 5                  8
    106                                 6                  8
    107                                 7                  8
    108                                 8                  8
    109                                 9                  8
    110                                10                  8
    115                                15                  8
    120                                20                  8
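The covered position discussed in Example 1 (equity bought at 100, call issued with premium 3 and strike 105) can be computed as follows; a short sketch:

```python
# Profit of holding the equity alone (bought at 100) versus holding it
# covered by the issued call (premium 3, strike 105), per Example 1.

def equity_profit(S_T, cost=100.0):
    return S_T - cost

def covered_profit(S_T, cost=100.0, strike=105.0, premium=3.0):
    # equity gain, plus the premium received, minus the call payoff paid out
    return (S_T - cost) + premium - max(0.0, S_T - strike)

prices = [90, 95, 100, 105, 106, 107, 108, 109, 110, 115, 120]
alone = [equity_profit(S) for S in prices]
covered = [covered_profit(S) for S in prices]
```

The covered profit is capped at strike − cost + premium = 8, while losses on a falling equity are softened by the premium received.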


Let us assume, more specifically, that the rate for the dollar at the moment t is St (US$1 = €St) and that X purchases goods on this day (t = 0) valued at US$1000, the rate being S0 = x (US$1 = €x), for delivery in t = T. The company X, on t = 0, acquires 1000 European US$/€ calls maturing on T, the exercise price being K = €x for US$1.

If ST > x, the option will be exercised and X will purchase its dollars at rate x (the rate in which the invoice is expressed); the company will lose only the total of the premium. If ST ≤ x, the option will not be exercised and X will purchase its dollars at the rate ST; the business will realise a profit of 1000 · (x − ST) less the premium.

The purchase of the option acts as insurance cover against changes in rates. Of course it cannot be free of charge (consider the point of view of the option issuer); its price is the option premium.

The case envisaged above corresponds to the acquisition of a call option. The same kind of reasoning can be applied to four situations, corresponding to the purchase or issue of a call option on the one hand or of a put option on the other hand. Hence we have Figures 5.2 and 5.3.

In addition to the simple cover strategy set out above, it is possible to create more complex combinations of the underlying equity, call options and put options. These more involved strategies are covered in Section 5.4.

Figure 5.2 Issue of a call option

Figure 5.3 Acquisition and issue of a put option


5.2 VALUE OF AN OPTION

5.2.1 Intrinsic value and time value

An option premium can be split into two terms: its intrinsic value and its time value. The intrinsic value of an option at a moment t is simply the profit realised by the purchaser (without taking account of the premium) if the option were exercised at t. More specifically, for a call option it is the difference, if that difference is positive,4 between the price of the underlying equity St at that moment and the exercise price5 K of the option. If the difference is negative, the intrinsic value is by definition 0. For a put option, the intrinsic value will be the difference between the exercise price and the underlying equity price.6 Therefore, if the intrinsic value of the option is termed VI, we will have

    VIt = max(0, St − K) = (St − K)+     for a call option
    VIt = max(0, K − St) = (K − St)+     for a put option

with the graphs shown in Figure 5.4.

The price of the option is of course at least equal to its intrinsic value. The part of the premium over and above the intrinsic value is termed the time value, written VT; hence:

    VTt = pt − VIt

This time value, which is added to the intrinsic value to give the premium, represents payment in anticipation of an additional profit for the purchaser. From the point of view of the issuer, it therefore represents a kind of risk premium. The time value will of course decrease as the time left to run decreases, and ends by being cancelled out at the maturity date (see Figure 5.5).

Figure 5.4 Intrinsic value of a call option and put option

Figure 5.5 Time value according to time

4 The option is then said to be 'in the money'. If the difference is negative, the option is said to be 'out of the money'. If the underlying share price is equal or close to the exercise price, it is said to be 'at the money'. These definitions are inverted for put options.
5 The option cannot in fact be exercised immediately unless it is of the American type. For a European option, the exercise price should normally be discounted for the period remaining until the maturity date.
6 This definition is given for an American option. For a European option, it is sufficient, within the interpretation of St, to replace the price at the moment t by the maturity date price.
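The split of the premium into intrinsic and time value can be sketched directly from the definitions above; the premium and prices used are hypothetical sample values.

```python
# Splitting an option premium p_t into intrinsic value VI_t and time value VT_t.

def intrinsic_value_call(s_t, k):
    return max(s_t - k, 0.0)        # VI_t = (S_t - K)^+

def intrinsic_value_put(s_t, k):
    return max(k - s_t, 0.0)        # VI_t = (K - S_t)^+

def time_value(premium, intrinsic):
    return premium - intrinsic      # VT_t = p_t - VI_t

# hypothetical sample values: an in-the-money call
p_t, s_t, k = 12.0, 105.0, 100.0
vi = intrinsic_value_call(s_t, k)   # intrinsic value 5.0
vt = time_value(p_t, vi)            # time value 7.0
```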

Asset and Risk Management

Figure 5.6 Time value according to underlying equity price

Figure 5.7 Splitting of call option premium

It is easy to see, the other parameters being held constant, that the time value will be greater the nearer the underlying equity price is to the exercise price, as shown in Figure 5.6. To understand this property, let us view things from the call issuer's point of view in order to fix ideas. If the option is out of the money, it will probably not be exercised and the issuer may dispense with acquiring the underlying equity; his risk (a steep rise in the underlying equity price) will therefore be low and he will receive very little reward. In the same way, an in-the-money option will probably be exercised, and the issuer will therefore have an interest in acquiring the underlying equity; a sharp drop in the underlying equity price is then a highly improbable risk, and the time value will also be low. Conversely, for an at-the-money option the issuer has no degree of certainty with regard to whether or not the option will be exercised, or how the underlying equity price will develop; the risk of the underlying equity price falling after he acquires the equity (or of a price surge without the underlying equity having been acquired) is therefore high, and a risk premium will be requested in consequence. This phenomenon is shown in Figure 5.7. In addition, it is evident that the longer the period remaining until the option contract matures, the higher the risk and the greater the time value (see Figure 5.8). Of course, the value of an option at maturity is identical to its intrinsic value:

CT = (ST − K)+
PT = (K − ST)+

5.2.2 Volatility

Of the parameters that define the price of an option, let us now look more specifically at the volatility σR of the return of the underlying equity. The volatility of an option

Figure 5.8 Call premium with long (a) and short (b) maturity

is defined as a measurement of the dispersion of the return of the underlying equity. In practice, it is generally taken for a reference period of one year and expressed as a percentage. This concept of volatility can be seen from two points of view: historical volatility and implied volatility. Historical volatility is simply the annualised standard deviation of the underlying equity return, obtained from daily observations of the return in the past:

σR = √[ J · (1/n) · Σt=1..n (Rt − R̄)² ]

Here, the factor J represents the number of working days in the year; n is the number of observations and Rt is the return on the underlying equity. It is easy to calculate, but the major problem is that it is always 'turned towards the past' when it really needs to help analyse future developments in the option price. For this reason, the concept of implied volatility has been introduced. This involves using a valuation model to estimate the dispersion of the return of the underlying equity for the period remaining until the contract matures. The value of the option premium is determined in practice by the law of supply and demand. In addition, this value is linked to the various factors through a valuation model, binomial:

pt = f(St, K, T − t, σR, RF)

or that of Black and Scholes (see Section 5.3). The resolution of this relation with respect to σR defines the implied volatility. Although it is more difficult to obtain, this concept is preferable and it is the one that will most often be used in practice.

5.2.3 Sensitivity parameters

5.2.3.1 'Greeks'

The premium is likely to vary when each of the parameters that determine the price of the option (spot price, exercise price, maturity etc.) changes. The aim of this paragraph is to study the indices,7 known as 'Greeks', which measure the sensitivity of the premium to fluctuations in some of these characteristics through the relation pt = f(St, K, τ, σR, RF).
7 In the same way as duration and convexity, which measure the sensitivity of the value of a bond following changes in interest rates (see Chapter 4).


Here, we will restrict ourselves to examining the most commonly used sensitivity coefficients: those that relate the option price to the underlying equity price, the time, the volatility and the risk-free rate. In addition, the sign indications given are valid for an option on a non-dividend-paying equity.

The coefficient Δ (delta) represents the sensitivity of the option price with respect to the underlying equity price. It is measured by dividing the variation in the option price by a small increase δSt in the underlying equity price:

Δ = [f(St + δSt, K, τ, σR, RF) − f(St, K, τ, σR, RF)] / δSt

or, more precisely:

Δ = lim(δSt→0) [f(St + δSt, K, τ, σR, RF) − f(St, K, τ, σR, RF)] / δSt = f′S(St, K, τ, σR, RF)

Thus, for a call, if the underlying equity price increases by €1, the price of the option will increase by €Δ. Δ will be between 0 and 1 for a call and between −1 and 0 for a put.

Another coefficient expresses the sensitivity of the option price with respect to the underlying equity price, but this time in the second order. This is the coefficient Γ (gamma), expressed as the ratio of the variation in Δ on one hand to that in the price St on the other hand:

Γ = f″SS(St, K, τ, σR, RF)

If one wishes to compare the dependency of the option premium on the underlying equity price with that of the price of a bond on the actuarial rate, it can be said that Δ is to duration what Γ is to convexity. The coefficient Γ, which is always positive, is the same for a call option and for a put option.

The following coefficient, termed Θ (theta), measures the dependence of the option price on time:

Θ = f′t(St, K, T − t, σR, RF)

or, by introducing the residual life span τ = T − t of the contract,

Θ = −f′τ(St, K, τ, σR, RF)

As the maturity date of the option contract approaches, the value of the contract diminishes, implying that Θ is generally negative.

The coefficient V (vega)8 measures the sensitivity of the option premium with respect to volatility:

V = f′σ(St, K, τ, σR, RF)

It is always positive and has the same value for a call and for a put. It is of course interpreted as follows: if the volatility increases by 1 %, the option price increases by V.

8 Also termed κ (kappa) on occasion – possibly because vega is not a Greek letter!
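The finite-difference definitions above can be sketched numerically. As pricing function f we use, for illustration, the Black and Scholes call formula of Section 5.3 (with the instant rate r); the bump size h and all parameter values below are assumptions of this sketch.

```python
import math

def norm_cdf(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price(S, K, tau, sigma, r):
    """Pricing function f(S, K, tau, sigma, r): Black and Scholes call (Section 5.3)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def greeks(S, K, tau, sigma, r, h=1e-3):
    """Finite-difference sketches of delta, gamma, theta and vega."""
    f = call_price
    delta = (f(S + h, K, tau, sigma, r) - f(S - h, K, tau, sigma, r)) / (2 * h)
    gamma = (f(S + h, K, tau, sigma, r) - 2 * f(S, K, tau, sigma, r)
             + f(S - h, K, tau, sigma, r)) / h ** 2
    # theta is the derivative with respect to t, i.e. minus that with respect to tau
    theta = -(f(S, K, tau + h, sigma, r) - f(S, K, tau - h, sigma, r)) / (2 * h)
    vega = (f(S, K, tau, sigma + h, r) - f(S, K, tau, sigma - h, r)) / (2 * h)
    return delta, gamma, theta, vega

d, g, t, v = greeks(S=100.0, K=110.0, tau=7.0 / 12.0, sigma=0.25, r=math.log(1.04))
```

The signs obtained match the text: Δ between 0 and 1 for the call, Γ and V positive, Θ negative.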


Finally, the coefficient ρ (rho) expresses the manner in which the option price depends on the risk-free rate RF:

ρ = f′RF(St, K, τ, σR, RF)

This coefficient will be positive or negative depending on whether we are dealing with a call or a put.

5.2.3.2 'Delta hedging'

These coefficients having been defined, we can move on to an interesting interpretation of the delta. This parameter plays its part in hedging a short position (issue) in a call option (referred to as 'delta hedging'). The question is: how many units of the underlying equity must the issuer of a call acquire in order to hedge his position? This quantity is referred to as X. If the current price of the underlying equity is S, the value of his portfolio, consisting of the purchase of X units of the underlying equity and the issue of one call on that equity, is:

V(S) = X · S − C(S)

If the price of the underlying equity changes from S to S + δS, the value of the portfolio changes to:

V(S + δS) = X · (S + δS) − C(S + δS)

As Δ ≈ [C(S + δS) − C(S)] / δS, the new value of the portfolio is:

V(S + δS) ≈ X · (S + δS) − [C(S) + Δ · δS]
          = X · S − C(S) + (X − Δ) · δS
          = V(S) + (X − Δ) · δS

The position will therefore be hedged against a movement (up or down) of the underlying equity price if the second term is zero (X = Δ), that is, if the issuer of the call holds Δ units of the underlying equity.

5.2.4 General properties

5.2.4.1 Call–put parity relation for European options

We will now draw up the relation that links a European call premium and a European put premium, both relating to the same underlying equity and both with the same exercise price and maturity date: this is termed the 'call–put parity relation'. We will establish this relation for a European equity option that does not distribute a dividend during the option contract period. Let us consider a portfolio put together at moment t with:

• the purchase of the underlying equity, whose value is St;
• the purchase of a put on this underlying equity, with exercise price K and maturity T; its value is therefore Pt(St, K, τ, σR, RF);


• the sale of a call on the same underlying equity, with exercise price K and maturity T; its value is therefore Ct(St, K, τ, σR, RF);
• the borrowing (at the risk-free rate RF) of a total worth K at time T; the amount borrowed is therefore K · (1 + RF)−τ.

The value of the portfolio at maturity T will be ST + PT − CT − K. As we have shown previously that CT = (ST − K)+ and that PT = (K − ST)+, this value at maturity will equal:

if ST > K:  ST + 0 − (ST − K) − K = 0
if ST ≤ K:  ST + (K − ST) − 0 − K = 0

This portfolio, regardless of changes to the value of the underlying equity between t and T and for constant K and RF, has a zero value at moment T. Because of the hypothesis of absence of arbitrage opportunity,9 the portfolio can only have a zero value at moment t as well. The zero value of this portfolio at moment t is expressed by:

St + Pt − Ct − K · (1 + RF)−τ = 0

or, in a more classic way, by:

Ct + K · (1 + RF)−τ = Pt + St

This is the parity relation announced.

Note
The 'call–put' parity relation is not valid for an exchange option, because of the interest rate spread between the two currencies. If the risk-free interest rates for the domestic currency and for the foreign currency are referred to as RF(D) and RF(F) (they are assumed to be constant and valid for any maturity date), it is easy to see that the parity relation takes the form:

Ct + K · (1 + RF(D))−τ = Pt + St · (1 + RF(F))−τ

5.2.4.2 Relation between European call and American call

Let us now establish the relation that links a European call to an American call, both on the same underlying equity and with the same exercise price and maturity date. As with the parity relation, we will deal only with equity options that do not distribute a dividend during the option contract period. As the American option can be exercised at any moment prior to maturity, its value will always be at least equal to the value of the European option with the same characteristics:

Ct(a)(St, K, T − t, σR, RF) ≥ Ct(e)(St, K, T − t, σR, RF)

The parity relation allows the following to be written in succession:

Ct(e) + K · (1 + RF)−τ = Pt(e) + St
Ct(e) ≥ St − K · (1 + RF)−τ > St − K
Ct(a) ≥ Ct(e) > (St − K)+

9 Remember that no financial movement has occurred between t and T, as we have excluded the payment of dividends.


As (St − K)+ represents what the American call would return if exercised at moment t, its holder is best advised to retain it until moment T. At all times, therefore, this option will have the same value as the corresponding European option:

Ct(a) = Ct(e)   ∀t ∈ [0; T]

We would point out that the identity between the American and European calls does not apply to puts or to other kinds of option (such as exchange options).

5.2.4.3 Inequalities on price

The values of calls and puts obey the following inequalities:

[St − K(1 + RF)−τ]+ ≤ Ct ≤ St
[K(1 + RF)−τ − St]+ ≤ Pt(e) ≤ K(1 + RF)−τ
[K − St]+ ≤ Pt(a) ≤ K

These inequalities limit the area in which the graph for the option according to the underlying equity price can be located. This leads to Figure 5.9 for a European or American call and Figure 5.10 for puts. The right-hand inequalities are obvious: they state simply that an option cannot be worth more than the gain it allows. A call cannot therefore be worth more than the underlying equity whose acquisition it allows. In the same way, a put cannot be worth more than the

Figure 5.9 Inequalities for a call value

Figure 5.10 Inequalities for the value of a European put and an American put


exercise price K at which it allows the underlying equity to be sold; and for a European put, it cannot exceed the discounted value of the exercise price in question (as the exercise can only occur on the maturity date). Let us now justify the left-hand inequality for a call. To do this, we set up at moment t a portfolio consisting of:

• the purchase of one call;
• a risk-free financial investment worth K at maturity: K(1 + RF)−τ;
• the sale of one unit of the underlying equity.

Its value at moment t will of course be:

Vt = Ct + K(1 + RF)−τ − St

Its value on maturity will depend on the evolution of the underlying equity:

VT = (ST − K) + K − ST = 0   if ST > K
VT = 0 + K − ST              if ST ≤ K

In other words, VT = (K − ST)+, which is not negative for any of the possible evolution scenarios. In the absence of arbitrage opportunity, we also have Vt ≥ 0, that is, Ct ≥ St − K(1 + RF)−τ. As the price of the option cannot be negative, we have the inequality announced. The left-hand inequality for a European put is obtained in the same way, by arbitrage-based logic using the portfolio consisting of:

• the purchase of one put;
• the purchase of one underlying equity unit;
• the borrowing of an amount worth K at maturity: K(1 + RF)−τ.

The left-hand inequality for an American put arises from the inequality for a European put. It should be noted that there is no need to discount the exercise price, as the moment at which the option right will be exercised is unknown.
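The two static portfolios used in this section can be checked at maturity with plain payoff arithmetic; the ST scenario values below are hypothetical.

```python
# Maturity payoffs of the two arbitrage portfolios of Sections 5.2.4.1 and 5.2.4.3
# (pure payoff arithmetic; the S_T scenarios are hypothetical sample values).

def parity_portfolio_payoff(s_T, k):
    """Equity + put - call - K: identically zero at maturity, which is what
    forces the call-put parity relation at every earlier moment."""
    return s_T + max(k - s_T, 0.0) - max(s_T - k, 0.0) - k

def bound_portfolio_payoff(s_T, k):
    """Call + K - equity: equals (K - S_T)^+, never negative, which forces
    C_t >= S_t - K(1 + R_F)^-tau in the absence of arbitrage."""
    return max(s_T - k, 0.0) + k - s_T

K = 100.0
for s_T in (60.0, 100.0, 150.0):
    assert parity_portfolio_payoff(s_T, K) == 0.0
    assert bound_portfolio_payoff(s_T, K) == max(K - s_T, 0.0)
```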

5.3 VALUATION MODELS

Before touching on the developed methods for determining the value of an option, we will show the basic principles of option pricing using an example that has been deliberately simplified as much as possible.

Example
Consider a European call option on the US$/€ exchange rate for which the exercise price is K = 1. Then suppose that at the present time (t = 0) the rate is S0 = 0.95 (US$1 = €0.95). We will be working with a zero risk-free rate (RF = 0) in order to simplify the developments. Let us suppose also that the random changes in the underlying between moments t = 0 and t = T correspond to two scenarios s1 and s2, for which ST is €1.1 and €0.9 respectively, and that the scenarios occur with the respective probabilities of 0.6 and 0.4:

ST = 1.1   Pr(s1) = 0.6
ST = 0.9   Pr(s2) = 0.4


The changes in the exchange option, which is also random, can therefore be described as:

CT = 0.1   Pr(s1) = 0.6
CT = 0.0   Pr(s2) = 0.4

Let us consider that at moment t = 0, we have a portfolio consisting of:

• the issue of a US$/€ call (at the initial price of C0);
• a loan of €X;
• the purchase of US$Y,

so that:

• the initial value V0 of the portfolio is zero: the purchase of the US$Y is financed exactly by what is generated by the issue of the call and the loan;
• the portfolio is risk-free, and will undergo the same evolution whatever the scenario (in fact, its value will not change, as we have assumed RF to be zero).

The initial value of the portfolio in € is therefore V0 = −C0 − X + 0.95Y = 0. Depending on the scenario, the final value will be given by:

VT(s1) = −0.1 − X + 1.1 · Y
VT(s2) = −X + 0.9 · Y

The hypothesis of absence of arbitrage opportunity allows confirmation that VT(s1) = VT(s2) = 0, and the consequent deduction of the following values: X = 0.45 and Y = 0.5. From the initial value of the portfolio, the initial value of the option is therefore deduced:

C0 = −X + 0.95Y = 0.025

It is important to note that this value is totally independent of the probabilities 0.6 and 0.4 associated with the two development scenarios for the underlying price; in particular, it is not the expected payoff under those probabilities, which would give C0 = 0.1 × 0.6 + 0 × 0.4 = 0.06. If now we determine another law of probability

Pr(s1) = q    Pr(s2) = 1 − q

for which C0 = Eq(CT), we have 0.025 = 0.1 · q + 0 · (1 − q), that is: q = 0.25. We are in fact looking at the law of probability for which S0 = Eq(ST):

Eq(ST) = 1.1 · 0.25 + 0.9 · 0.75 = 0.95 = S0
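The replication argument of this example can be sketched as a small computation, using the values given above:

```python
# Replication in the simplified example: issue one call (price C0), borrow X euros,
# buy Y dollars, with V_0 = 0 and a riskless (zero) final value:
#   V_T(s1) = -0.1 - X + 1.1 * Y = 0
#   V_T(s2) =      - X + 0.9 * Y = 0
Y = 0.1 / (1.1 - 0.9)              # 0.5 dollars purchased
X = 0.9 * Y                        # 0.45 euros borrowed
C0 = -X + 0.95 * Y                 # 0.025, from V_0 = -C0 - X + 0.95*Y = 0

# risk-neutral probability q, the law for which S0 = E_q(S_T):
q = (0.95 - 0.9) / (1.1 - 0.9)     # 0.25
expected_payoff = q * 0.1 + (1 - q) * 0.0   # equals C0, since R_F = 0
```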


We have therefore seen, in a very specific case that of course needs generalisation, that the current value of the option is equal to the mathematical expectation of its future value,10 with respect to the law of probability for which the current value of the underlying equity is equal to the expectation of its future value. This law of probability is known as the risk-neutral probability.

5.3.1 Binomial model for equity options

This model was produced by Cox, Ross and Rubinstein.11 In this discrete model we look simply at a list of times 0, 1, 2, . . . , T separated by a unit of time (the period), which is usually quite short. Placing ourselves in a perfect market, we envisage a European equity option that does not distribute any dividends during the contract period and whose underlying has a constant volatility during the period in question. In addition, we assume that the risk-free interest rate does not change during this period, that it is valid for any maturity (flat, constant yield curve), and that it is the same for a loan and for an investment. This interest rate, termed RF, will be expressed with respect to a duration equal to one period; the same will apply for the other parameters (return, volatility etc.). Remember (Section 3.4.2) that the change in the underlying equity value from one time to the next is dichotomous in nature: the equity has at moment t the value St, but at the next moment t + 1 will have one of the two values St · u (greater than St) or St · d (less than St), with respective probabilities α and (1 − α). We have d ≤ 1 < 1 + RF ≤ u, and the parameters u, d and α, which are assumed to be constant over time, should be estimated on the basis of observations. We therefore have the following graphic representation of the development in equity prices over one period:

St → St+1 = St · u   with probability α
St → St+1 = St · d   with probability (1 − α)

and therefore, more generally, the tree shown in Figure 5.11. Now let us address the issue of evaluating options at the initial moment.
Our reasoning will be applied to a call option. It is known that the value of the option at the end of the contract will be expressed according to the value of the equity by CT = (ST − K)+ .

S0
S0 · u       S0 · d
S0 · u²      S0 · ud      S0 · d²
S0 · u³      S0 · u²d     S0 · ud²     S0 · d³
…

Figure 5.11 Binomial tree for underlying equity

10 When the risk-free rate is zero, remember.
11 Cox J., Ross S. and Rubinstein M., Option pricing: a simplified approach, Journal of Financial Economics, No. 7, 1979, pp. 229–63.


After constructing the tree diagram for the equity from moment 0 to moment T, we will now construct the tree from T to 0 for the option, starting from each of the ends of the equity tree diagram, so as to reconstruct the value C0 of the option at 0. This reasoning will be applied in stages.

5.3.1.1 One period

Assume that T = 1. From the equity tree diagram it can be clearly seen that the call C0 (unknown) can evolve into two values with the respective probabilities of α and (1 − α):

C0 → C1 = C(u) = (S0 · u − K)+   with probability α
C0 → C1 = C(d) = (S0 · d − K)+   with probability (1 − α)

As the value of C1 (that is, the value of C(u) and C(d)) is known, we will now determine the value of C0. To do this, we will construct a portfolio put together at t = 0 by:

• the purchase of X underlying equities with a value of S0;
• the sale of one call on this underlying equity, with a value of C0.

The value V0 of this portfolio, and its evolution V1 in the context described, are given by:

V0 = X · S0 − C0 → V1 = X · S0 · u − C(u)   with probability α
V0 = X · S0 − C0 → V1 = X · S0 · d − C(d)   with probability (1 − α)

We then choose X so that the portfolio is risk-free (the two values of V1 will then be identical). The hypothesis of absence of arbitrage opportunity shows that in this case, the return on this portfolio must be given by the risk-free rate RF. We therefore obtain:

V1 = X · S0 · u − C(u) = X · S0 · d − C(d)
V1 = (X · S0 − C0)(1 + RF)

The first equation readily provides:

X · S0 = [C(u) − C(d)] / (u − d)

and therefore:

V1 = [d · C(u) − u · C(d)] / (u − d)

The second equation then provides:

[d · C(u) − u · C(d)] / (u − d) = ( [C(u) − C(d)] / (u − d) − C0 ) · (1 + RF)

This easily resolves with respect to C0:

C0 = (1 + RF)−1 { [((1 + RF) − d) / (u − d)] · C(u) + [(u − (1 + RF)) / (u − d)] · C(d) }

The coefﬁcients for C(u) and C(d) are clearly between 0 and 1 and total 1. We therefore introduce: (1 + RF ) − d u − (1 + RF ) q= 1−q = u−d u−d They constitute the neutral risk law of probability. We therefore have the value of the original call: C0 = (1 + RF )−1 [q · C(u) + (1 − q) · C(d)] Note 1 As was noted in the introductory example, the probability of growth α is not featured in the above relation. The only law of probability involved is the one relating to the risk-neutral probability q, with respect to which C0 appears as the discounted value of the average value of the call at maturity (t = 1). The term ‘risk-neutral probability’ is based on the logic that the expected value of the underlying equity at maturity (t = 1) with respect to this law of probability is given by: Eq (S1 ) = q · S0 · u + (1 − q) · S0 · d

u − (1 + RF ) (1 + RF ) − d u+ d = S0 u−d u−d = S0 (1 + RF ) The change in the risk-free security is the same as the expected change in the risked security (for this law of probability). Note 2 When using the binomial model practically, it is simpler to apply the reasoning with respect to one single period for each node on the tree diagram, progressing from T to 0. We will, however, push this analysis further in order to obtain a general result. 5.3.1.2 Two periods Let us now suppose that T = 2. The binomial tree diagram for the option will now be written as: 2 + −→C2 = C(u, u) = (S0 · u − K) − − − C = C(u) − 1 −→ −−→ − C2 = C(u, d) = C(d, u) = (S0 · ud − K)+ C0− −−→ → − − C1 = C(d) − − − −→ C2 = C(d, d) = (S0 · d 2 − K)+


The previous reasoning allows the transition from time 2 to time 1:

C(u) = (1 + RF)−1 [q · C(u, u) + (1 − q) · C(u, d)]
C(d) = (1 + RF)−1 [q · C(d, u) + (1 − q) · C(d, d)]

And from time 1 to time 0:

C0 = (1 + RF)−1 [q · C(u) + (1 − q) · C(d)]
   = (1 + RF)−2 [q² · C(u, u) + 2q(1 − q) · C(u, d) + (1 − q)² · C(d, d)]

Consideration of the coefficients of C(u, u), C(u, d) and C(d, d) allows the above note to be made more specific: C0 is the discounted value of the expected value of the call at maturity (t = 2) with respect to a binomial law of probability12 with parameters (2; q).

5.3.1.3 T periods

Generalising what has already been said, it is seen that C0 is the discounted value of the expected value of the call at maturity (t = T) with respect to a binomial law of probability with parameters (T; q). We can therefore write:

C0 = (1 + RF)−T Σj=0..T (T choose j) q^j (1 − q)^(T−j) C(u, . . . , u, d, . . . , d)   [j times u, T − j times d]
   = (1 + RF)−T Σj=0..T (T choose j) q^j (1 − q)^(T−j) (S0 u^j d^(T−j) − K)+

As u^j d^(T−j) is an increasing function of j, if one introduces J = min { j : S0 u^j d^(T−j) − K > 0 }, that is, the smallest value of j strictly greater than [ln K − ln(S0 d^T)] / (ln u − ln d), the evaluation of the call takes the form:

C0 = (1 + RF)−T Σj=J..T (T choose j) q^j (1 − q)^(T−j) (S0 u^j d^(T−j) − K)
   = S0 Σj=J..T (T choose j) [uq / (1 + RF)]^j [d(1 − q) / (1 + RF)]^(T−j)
     − K(1 + RF)−T Σj=J..T (T choose j) q^j (1 − q)^(T−j)

Because:

uq / (1 + RF) + d(1 − q) / (1 + RF) = [u((1 + RF) − d) + d(u − (1 + RF))] / [(1 + RF)(u − d)] = 1

we introduce:

q′ = uq / (1 + RF)        1 − q′ = d(1 − q) / (1 + RF)

12 See Appendix 2.


By introducing the notation B(n; p) for a binomial random variable with parameters (n, p), we can therefore write:

C0 = S0 · Pr[B(T; q′) ≥ J] − K(1 + RF)−T · Pr[B(T; q) ≥ J]

The 'call–put' parity relation C0 + K(1 + RF)−T = P0 + S0 immediately yields the evaluation formula for the put with the same characteristics:

P0 = −S0 · Pr[B(T; q′) < J] + K(1 + RF)−T · Pr[B(T; q) < J]

Note
The parameters u and d are determined, for example, on the basis of the volatility σR of the return of the underlying equity. In fact, as the return relative to a period takes the values (u − 1) or (d − 1) with the respective probabilities α and (1 − α), we have:

ER = α(u − 1) + (1 − α)(d − 1)
σR² = α(u − 1)² + (1 − α)(d − 1)² − [α(u − 1) + (1 − α)(d − 1)]² = α(1 − α)(u − d)²

By choosing α = 1/2, we arrive at u − d = 2σR. Cox, Ross and Rubinstein suggest taking d = 1/u, which leads to an easily solved second-degree equation or, with a Taylor approximation, u = e^σR and d = e^−σR.

Example
Let us consider a call option of seven months' duration, relating to an equity with a current value of €100 and an exercise price of €110. It is assumed that its volatility is σR = 0.25, calculated on an annual basis, and that the risk-free rate is 4 % per annum. We will assess the value of this call at t = 0 by constructing a binomial tree diagram with the month as the basic period. The equivalent volatility and risk-free rate are given by:

σR = √(1/12) · 0.25 = 0.0722
RF = (1.04)^(1/12) − 1 = 0.003274

We therefore have u − 1/u = 0.1443, for which the only positive root is13 u = 1.07477 (and therefore d = 0.93043). The risk-neutral probability is:

q = (1.003274 − 0.93043) / (1.07477 − 0.93043) = 0.5047

13 If we had chosen α = 1/3 instead of 1/2, we would have found that u = 1.0795, that is, a relatively small difference; the estimation of u therefore only depends relatively little on α.
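The practical backward induction described in Note 2 can be sketched as follows, here run with the parameters just computed (u = 1.07477, d = 0.93043, RF = 0.003274 per monthly period); it reproduces the value of the tree construction shown next.

```python
def crr_call_price(S0, K, u, d, RF, T):
    """European call by backward induction on a Cox-Ross-Rubinstein tree:
    at each node, C = [q * C_up + (1 - q) * C_down] / (1 + RF)."""
    q = (1.0 + RF - d) / (u - d)                    # risk-neutral probability
    # terminal payoffs C_T = (S_T - K)^+, for j = 0..T up-moves
    values = [max(S0 * u ** j * d ** (T - j) - K, 0.0) for j in range(T + 1)]
    for _ in range(T):                              # step back from T to 0
        values = [(q * values[j + 1] + (1.0 - q) * values[j]) / (1.0 + RF)
                  for j in range(len(values) - 1)]
    return values[0]

price = crr_call_price(S0=100.0, K=110.0, u=1.07477, d=0.93043,
                       RF=0.003274, T=7)            # close to the 4.657 found below
```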


Let us first show the practical method of working: the construction of two binomial tree diagrams (forward for the equity and backward for the option). For example, we have for the two values of S1:

S0 · u = 100 · 1.07477 = 107.477
S0 · d = 100 · 0.93043 = 93.043

The binomial tree for the underlying equity is shown in Table 5.3.

Table 5.3 Binomial tree for underlying equity

0        1        2        3        4        5        6        7
100.000  107.477  115.513  124.150  133.432  143.409  154.132  165.656
          93.043  100.000  107.477  115.513  124.150  133.432  143.409
                   86.570   93.043  100.000  107.477  115.513  124.150
                            80.548   86.570   93.043  100.000  107.477
                                     74.944   80.548   86.570   93.043
                                              69.731   74.944   80.548
                                                       64.880   69.731
                                                                60.366

The binomial tree diagram for the option is constructed backwards. The last column is therefore constructed on the basis of the relation CT = (ST − K)+. The first component of this column is max(165.656 − 110; 0) = 55.656, and the elements in the preceding columns can be deduced from it, for example:

(1/1.003274) · [0.5047 · 55.656 + 0.4953 · 33.409] = 44.491

This gives us Table 5.4.

Table 5.4 Binomial tree for option

0       1       2       3       4       5       6       7
4.657   7.401  11.462  17.196  24.809  34.126  44.491  55.656
        1.891   3.312   5.696   9.555  15.482  23.791  33.409
                0.456   0.906   1.801   3.580   7.118  14.150
                        0       0       0       0       0
                                0       0       0       0
                                        0       0       0
                                                0       0
                                                        0

The initial value of the call is therefore C0 = €4.657. Let us now show the calculation of the value of the option based on the final formula. The auxiliary probability is given by:

q′ = 1.07477 · 0.5047 / 1.003274 = 0.5406


In addition, as [ln 110 − ln(100 · d⁷)] / (ln u − ln d) = 4.1609, we find that J = 5. This will allow us to calculate:

Pr[B(7; p) ≥ 5] = (7 choose 5) p⁵(1 − p)² + (7 choose 6) p⁶(1 − p) + (7 choose 7) p⁷
                = p⁵ (21 − 35p + 15p²)

and therefore: Pr[B(7; q) ≥ 5] = 0.2343 and Pr[B(7; q′) ≥ 5] = 0.2984. The price of the call therefore equals:

C0 = 100 · 0.2984 − 110 · (1 + RF)−7 · 0.2343 = 4.657

Meanwhile, the premium for the put with the same characteristics is:

P0 = −100 · (1 − 0.2984) + 110 · (1 + RF)−7 · (1 − 0.2343) = 12.168

Note that it is logical for the price of the put to be higher than that of the call, as the option is currently 'out of the money'.

5.3.1.4 Taking account of dividends

We have assumed until now that the underlying equity does not pay a dividend. Let us now examine a case in which dividends are paid. If only one dividend is paid during the ith period (interval [i − 1; i]), and the rate of the dividend is termed δ (ratio of the dividend amount to the value of the security), the value of the security will be reduced at the rate δ when the dividend is paid, and the binomial tree diagram for the underlying equity must therefore be modified as follows:

• up to the time (i − 1), no change: the values carried by the nodes in the tree diagram for a period j ≤ i − 1 will be S0 u^k d^(j−k) (k = 0, . . . , j);
• from the time i onwards (that is, for j ≥ i), the values become14 S0 (1 − δ) u^k d^(j−k) (k = 0, . . . , j);
• the tree diagram for the option is constructed in the classic backward style from that point;
• if several dividends are paid at various times during the option contract, the procedure described above must be applied whenever a payment is made.

5.3.2 Black and Scholes model for equity options

We now develop the well-known continuous-time model compiled by Black and Scholes.15 In this model the option, concluded at moment 0 and maturing at moment T, can be evaluated at any moment t ∈ [0; T], and as usual, we note τ = T − t.
We further assume that the risk-free rate of interest does not change during this period, that it is valid for any maturity date (flat and constant yield curve) and that it is the same

14 This means that when the tree diagram is constructed purely numerically, taking account of the factor (1 − δ) will only be effective for the passage from the time i − 1 to the time i.
15 Black F. and Scholes M., The pricing of options and corporate liabilities, Journal of Political Economy, Vol. 81, 1973, pp. 637–59.


for an investment as for a loan. The annual rate of interest, termed RF up until now, is replaced in this continuous model by the corresponding instant rate r = ln(1 + RF), so that a unitary total invested during a period of t years becomes (1 + RF)^t = e^rt. Remember (see Section 3.4.2) that the evolution of the underlying equity value is governed by the stochastic differential equation:

dSt / St = ER · dt + σR · dwt

We will initially establish16 the Black and Scholes formula for a call option, the value of which is considered to be a function of the value St of the underlying equity and of time t, the other parameters being considered constant: Ct = C(St, t). By applying Itô's formula to the function C(St, t), we obtain:

dC(St, t) = [C′t + ER St C′S + (σR²/2) St² C″SS] · dt + σR St C′S · dwt

Let us now put together a portfolio that at moment t consists of:

• the purchase of X underlying equities with a value of St;
• the sale of one call on the underlying equity, with value C(St, t).

The value Vt of this portfolio is given by Vt = X · St − C(St, t). This, by differentiation, gives:

dVt = X · [ER St · dt + σR St · dwt] − [C′t + ER St C′S + (σR²/2) St² C″SS] · dt − σR St C′S · dwt
    = [X · ER St − (C′t + ER St C′S + (σR²/2) St² C″SS)] · dt + [X · σR St − σR St C′S] · dwt

We then choose X so that the portfolio no longer has any random component (the coefficient of dwt in the preceding relation must be zero). The hypothesis of absence of arbitrage opportunity shows that in this case, the return on the portfolio must be given by the risk-free rate r:

dVt / Vt = r · dt + 0 · dwt

We therefore arrive at:

[X · ER St − (C′t + ER St C′S + (σR²/2) St² C″SS)] / [X · St − C(St, t)] = r
[X · σR St − σR St C′S] / [X · St − C(St, t)] = 0

16 We will only develop the financial part of the logic, as the end of the demonstration is purely analytical. Readers interested in details of the calculations can consult the original literature or Devolder P., Finance Stochastique, Éditions de l'ULB, 1993.


or, in the same way:

X·(E_R − r)·S_t − [C_t + E_R S_t C_S + (σ_R²/2)·S_t² C_SS − r·C(S_t, t)] = 0
X − C_S = 0

The second equation provides the value of X that cancels out the random component of the portfolio: X = C_S. By making this substitution in the first equation, we find:

(E_R − r)·S_t·C_S − [C_t + E_R S_t C_S + (σ_R²/2)·S_t² C_SS − r·C(S_t, t)] = 0

In other words:

C_t + r·S_t C_S + (σ_R²/2)·S_t² C_SS − r·C(S_t, t) = 0

In this equation, the instant mean return E_R has disappeared.^17 We are looking at a second-order partial derivative equation (in which none of the elements are now random) for the unknown function C(S_t, t). It admits a single solution if two limit conditions are imposed:

C(0, t) = 0
C(S_T, T) = (S_T − K)^+

Through a change of variables, this equation can be turned into an equation well known to physicists: the heat equation.^18 It is in fact easy, although demanding, to see that this occurs if the new unknown function u(x, s) = C(S_t, t)·e^{rτ} is introduced (where τ = T − t), with the change of variables:

S_t = K·exp[ σ_R²·(x − s) / (2·(r − σ_R²/2)) ]
t = T − s·σ_R² / (2·(r − σ_R²/2)²)

which inverts to:

x = (2/σ_R²)·(r − σ_R²/2)·[ ln(S_t/K) + (r − σ_R²/2)·τ ]
s = (2/σ_R²)·(r − σ_R²/2)²·τ

^17 In the same way, the independence of the result obtained by the binomial model with respect to the probability α governing the evolution of the underlying share price was noted.
^18 See for example: Krasnov M., Kisilev A., Makarenko G. and Chikin E., Mathématiques supérieures pour ingénieurs et polytechniciens, De Boeck, 1993. Also: Sokolnikov I. S. and Redheffer R. M., Mathematics of Physics and Modern Engineering, McGraw-Hill, 1966.

The equation obtained turns into: u_xx = u_s, with the limit conditions:

lim_{x→−∞} u(x, s) = 0

u(x, 0) = v(x) = K·[ exp( σ_R²·x / (2·(r − σ_R²/2)) ) − 1 ]   if x ≥ 0
u(x, 0) = v(x) = 0                                           if x < 0

This heat equation has the solution:

u(x, s) = 1/(2·√(πs)) · ∫_{−∞}^{+∞} v(y)·e^{−(x−y)²/(4s)} dy

By making the calculations with the specific expression of v(y), and then making the inverse change of variables, we obtain the Black and Scholes formula for the call option:

C(S_t, t) = S_t·Φ(d_1) − K·e^{−rτ}·Φ(d_2)

where we have:

d_{1,2} = [ ln(S_t/K) + (r ± σ_R²/2)·τ ] / (σ_R·√τ)

and the function Φ represents the standard normal distribution function:

Φ(t) = 1/√(2π) · ∫_{−∞}^{t} e^{−x²/2} dx

The price P_t of a put option can be evaluated on the basis of the price of the call option, thanks to the 'call–put' parity relation: C_t + K·e^{−rτ} = P_t + S_t. In fact:

P(S_t, t) = C(S_t, t) + K·e^{−rτ} − S_t
          = S_t·Φ(d_1) − K·e^{−rτ}·Φ(d_2) + K·e^{−rτ} − S_t
          = −S_t·[1 − Φ(d_1)] + K·e^{−rτ}·[1 − Φ(d_2)]

and therefore:

P(S_t, t) = −S_t·Φ(−d_1) + K·e^{−rτ}·Φ(−d_2)

because 1 − Φ(t) = Φ(−t).

Example
Consider an option with the same characteristics as in Section 5.3.1: S_0 = 100, K = 110, t = 0, T = 7 months, σ_R = 0.25 on an annual basis and R_F = 4% per year.


We are working with the year as the time basis, so that: τ = 7/12, r = ln 1.04 = 0.03922.

d_{1,2} = [ ln(100/110) + (0.03922 ± 0.25²/2)·(7/12) ] / (0.25·√(7/12))

which gives d_1 = −0.2839 and d_2 = −0.4748. Hence Φ(d_1) = 0.3882 and Φ(d_2) = 0.3175. This allows the price of the call to be calculated:

C = C(S_0, 0) = 100·Φ(d_1) − 110·e^{−0.03922·(7/12)}·Φ(d_2) = 4.695

As the put has the same characteristics, its premium totals:

P = P(S_0, 0) = −100·[1 − Φ(d_1)] + 110·e^{−0.03922·(7/12)}·[1 − Φ(d_2)] = 12.207

The similarity of these figures to the values obtained using the binomial model (4.657 and 12.168 respectively) will be noted.
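The figures above can be reproduced with a short script. This is an illustrative sketch (the function name bs_call_put is ours, not the book's), using the standard normal distribution function Φ from Python's standard library; the last decimal may differ slightly from the printed values, which were obtained with rounded table values of Φ.

```python
from math import exp, log, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal distribution function

def bs_call_put(S, K, r, sigma, tau):
    """Black and Scholes prices of a European call and put.

    S: underlying price, K: exercise price, r: instant risk-free rate,
    sigma: annual volatility, tau: time to maturity in years.
    """
    d1 = (log(S / K) + (r + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    call = S * Phi(d1) - K * exp(-r * tau) * Phi(d2)
    put = -S * Phi(-d1) + K * exp(-r * tau) * Phi(-d2)
    return call, put

# Example of Section 5.3.1: S0 = 100, K = 110, T = 7 months,
# sigma = 0.25, RF = 4% per year so r = ln(1.04)
C, P = bs_call_put(100, 110, log(1.04), 0.25, 7 / 12)
print(C, P)  # close to the book's 4.695 and 12.207
```

Call–put parity, C + K·e^{−rτ} = P + S, holds exactly for the two values returned.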

5.3.2.2 Sensitivity parameters

When the price of an option is calculated using the Black and Scholes formula, the sensitivity parameters or 'Greeks' take on a practical form. Let us examine first the delta of a call option. If the reduced normal density is termed φ:

φ(x) = Φ'(x) = (1/√(2π))·e^{−x²/2}

we arrive, by derivation, at:

Δ(C) = C_S = Φ(d_1) + [1/(S_t σ_R √τ)]·[S_t φ(d_1) − K·e^{−rτ}·φ(d_2)]

It is easy to see that the quantity between the square brackets is zero and that therefore Δ(C) = Φ(d_1); by following a very similar logic, we arrive for a put at Δ(P) = Φ(d_1) − 1.

The above formula provides a very simple means of determining the number of equities that should be held by a call issuer to hedge his risk (delta hedging). This is a common use of the Black and Scholes relation: the price of an option is determined by the law of supply and demand, and 'inversion' of the relation provides the implied volatility. The latter is then used in the relation Δ(C) = Φ(d_1), which is then known as the hedging formula.


The other sensitivity parameters (gamma, theta, vega and rho) are obtained in a similar way:

Γ(C) = Γ(P) = φ(d_1) / (S_t σ_R √τ)

Θ(C) = −S_t σ_R φ(d_1)/(2√τ) − r·K·e^{−rτ}·Φ(d_2)
Θ(P) = −S_t σ_R φ(d_1)/(2√τ) + r·K·e^{−rτ}·Φ(−d_2)

V(C) = V(P) = √τ·S_t·φ(d_1)

ρ(C) = τ·K·e^{−rτ}·Φ(d_2)
ρ(P) = −τ·K·e^{−rτ}·Φ(−d_2)

To finish, let us mention a relationship that links the delta, gamma and theta parameters. The partial derivative equation obtained during the demonstration of the Black and Scholes formula for a call is also valid for a put (the price therefore being referred to as p without further specification):

p_t + r·S_t p_S + (σ_R²/2)·S_t² p_SS − r·p(S_t, t) = 0

This, using the sensitivity parameters, gives:

Θ + r·S_t·Δ + (σ_R²/2)·S_t²·Γ = r·p
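These expressions can be checked numerically on the running example. The sketch below (variable names ours) computes the Greeks of the call and verifies the theta–delta–gamma relation to machine precision.

```python
from math import exp, log, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal distribution function
phi = NormalDist().pdf  # reduced normal density

# Same data as the running example: S0 = 100, K = 110, 7 months, 25% vol
S, K, r, sigma, tau = 100.0, 110.0, log(1.04), 0.25, 7 / 12

d1 = (log(S / K) + (r + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
d2 = d1 - sigma * sqrt(tau)
C = S * Phi(d1) - K * exp(-r * tau) * Phi(d2)

delta = Phi(d1)                              # Delta(C)
gamma = phi(d1) / (S * sigma * sqrt(tau))    # Gamma(C) = Gamma(P)
theta = -S * sigma * phi(d1) / (2 * sqrt(tau)) - r * K * exp(-r * tau) * Phi(d2)
vega = sqrt(tau) * S * phi(d1)
rho = tau * K * exp(-r * tau) * Phi(d2)

# The relation linking the parameters:
# Theta + r*S*Delta + (sigma^2/2)*S^2*Gamma = r * C
lhs = theta + r * S * delta + 0.5 * sigma**2 * S**2 * gamma
assert abs(lhs - r * C) < 1e-10
print(delta)  # number of equities held per call issued (delta hedging)
```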

5.3.2.3 Taking account of dividends

If a continuous-rate dividend^19 δ is paid between t and T and the underlying equity is worth S_t (resp. S_T) at moment t (resp. T), it can be said that had it not paid a dividend, it would have passed from value S_t to value e^{δτ}·S_T. It can also be said that the same equity without dividend would pass from value e^{−δτ}·S_t at moment t to value S_T at moment T. In order to take account of the dividend, therefore, it will suffice within the Black and Scholes formula to replace S_t by e^{−δτ}·S_t, thus giving:

C(S_t, t) = S_t·e^{−δτ}·Φ(d_1) − K·e^{−rτ}·Φ(d_2)
P(S_t, t) = −S_t·e^{−δτ}·Φ(−d_1) + K·e^{−rτ}·Φ(−d_2)

where we have:

d_{1,2} = [ ln(S_t/K) + (r − δ ± σ_R²/2)·τ ] / (σ_R·√τ)

^19 A discounting/capitalisation factor of the exponential type is used here and throughout this paragraph.
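A sketch of the dividend adjustment (the function name and the 3% dividend rate below are illustrative assumptions, not from the book): with δ = 0 the standard formula is recovered, and a positive δ lowers the call price and raises the put price.

```python
from math import exp, log, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal distribution function

def bs_dividend(S, K, r, div, sigma, tau):
    """Black and Scholes call/put with a continuous dividend rate div:
    S is replaced by S*exp(-div*tau), so (r - div) enters d1 and d2."""
    d1 = (log(S / K) + (r - div + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    call = S * exp(-div * tau) * Phi(d1) - K * exp(-r * tau) * Phi(d2)
    put = -S * exp(-div * tau) * Phi(-d1) + K * exp(-r * tau) * Phi(-d2)
    return call, put

# With div = 0 we must recover the no-dividend prices of the running example
c0, p0 = bs_dividend(100, 110, log(1.04), 0.0, 0.25, 7 / 12)
c3, p3 = bs_dividend(100, 110, log(1.04), 0.03, 0.25, 7 / 12)
print(c0, c3)  # the dividend lowers the call price
```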


5.3.3 Other models of valuation

5.3.3.1 Options on bonds

It is not enough to apply the methods shown above (binomial tree diagram or Black and Scholes formula) to options on bonds. In fact:

• Account must be taken of the coupons paid regularly.
• The constancy of the underlying volatility (a valid hypothesis for equities) does not apply in the case of bonds, as their values on maturity converge towards the repayment value R.

The binomial model can be adapted to suit this situation, but the adaptation is not an obvious generalisation of the method set out above.^20 Adapting the Black and Scholes model consists of replacing the geometric Brownian motion that represents the changes in the value of the equity with a stochastic process that governs the changes in interest rates, such as those used as the basis for the Vasicek and Cox, Ingersoll and Ross models (see Section 4.5). Unfortunately, the partial derivative equation deduced therefrom does not generally allow an analytical solution, and numerical solutions therefore have to be used.^21

5.3.3.2 Exchange options

For an exchange option, two risk-free rates have to be taken into consideration: one relative to the domestic currency and one relative to the foreign currency. For the discrete model, these two rates are referred to respectively as R_F^(D) and R_F^(F). By adapting the logic of Section 5.3.1 to this generalisation, it is possible to determine the price of an exchange option using the binomial tree diagram technique. It will be seen that the principle set out above remains valid, with a slight alteration of the closed formulae: C_0 is the discounted expected value of the call on maturity (over one period):

C_0 = (1 + R_F^(D))^{−1}·[q·C(u) + (1 − q)·C(d)]

with the risk-neutral probability:

q = [1 + (R_F^(D) − R_F^(F)) − d] / (u − d)
1 − q = [u − 1 − (R_F^(D) − R_F^(F))] / (u − d)

For the continuous model, the interest rates in the domestic and foreign currencies are referred to respectively as r^(D) and r^(F). Following a logic similar to that accepted for options on dividend-paying equities, we see that the Black and Scholes formula is still valid provided the underlying equity price S_t is replaced by S_t·e^{−r^(F)·τ}, which gives the formulae:

C(S_t, t) = S_t·e^{−r^(F)·τ}·Φ(d_1) − K·e^{−r^(D)·τ}·Φ(d_2)
P(S_t, t) = −S_t·e^{−r^(F)·τ}·Φ(−d_1) + K·e^{−r^(D)·τ}·Φ(−d_2)

where we have:

d_{1,2} = [ ln(S_t/K) + (r^(D) − r^(F) ± σ_R²/2)·τ ] / (σ_R·√τ)

This is known as the Garman–Kohlhagen formula.^22

^20 Read for example Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
^21 See for example Courtadon G., The pricing of options on default-free bonds, Journal of Financial and Quantitative Analysis, Vol. 17, 1982, pp. 75–100.
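A sketch of the Garman–Kohlhagen formulae (the spot, strike, rates and volatility below are hypothetical figures chosen for illustration). The call–put parity for exchange options, C − P = S_t·e^{−r^(F)τ} − K·e^{−r^(D)τ}, provides a built-in check.

```python
from math import exp, log, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal distribution function

def garman_kohlhagen(S, K, r_dom, r_for, sigma, tau):
    """Exchange (currency) option: the Black and Scholes formula with S
    replaced by S*exp(-r_for*tau); r_dom discounts the exercise price."""
    d1 = (log(S / K) + (r_dom - r_for + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    call = S * exp(-r_for * tau) * Phi(d1) - K * exp(-r_dom * tau) * Phi(d2)
    put = -S * exp(-r_for * tau) * Phi(-d1) + K * exp(-r_dom * tau) * Phi(-d2)
    return call, put

# Hypothetical data: spot 1.10, strike 1.15, 6 months, 10% volatility,
# 3% domestic rate, 1% foreign rate
C, P = garman_kohlhagen(1.10, 1.15, 0.03, 0.01, 0.10, 0.5)

# Call-put parity for exchange options
parity = 1.10 * exp(-0.01 * 0.5) - 1.15 * exp(-0.03 * 0.5)
assert abs((C - P) - parity) < 1e-12
print(C, P)
```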

5.4 STRATEGIES ON OPTIONS^23

5.4.1 Simple strategies

5.4.1.1 Pure speculation

As we saw in Section 5.1, the asymmetrical payoff structure particular to options allows investors who hold them in isolation to profit from favourable variations in the underlying equity price while limiting the loss (to the premium paid) when a contrary variation occurs. The issue of a call/put option, on the other hand, is a much more speculative operation. The profit will be limited to the premium if the underlying equity price remains lower/higher than the exercise price, while considerable losses may arise if the price of the underlying equity rises/falls sharply. This type of operation should therefore only be envisaged if the issuer is completely confident that the price of the underlying equity will fall/rise.

5.4.1.2 Simultaneous holding of a put option and the underlying equity

As the purchase of a put option allows one to profit from a fall in the underlying equity price, it seems natural to link it to the holding of the underlying equity, in order to limit the loss inflicted by a fall in the price of the equity held alone.

5.4.1.3 Issue of a call option with simultaneous holding of the underlying equity

We have also seen (Example 1 in Section 5.1.2) that it is worthwhile issuing a call option while holding the underlying equity at the same time. In fact, when the underlying equity price falls (or rises slightly), the loss incurred thereon is partly compensated by encashment of the premium, whereas when the price rises steeply, the profit that would have been realised on the underlying equity is limited to the price of the option increased by the difference between the exercise price and the underlying equity price at the beginning of the contract.

5.4.2 More complex strategies

Combining options allows the creation of payoff distributions that do not exist for classic assets such as equities or bonds.
These strategies are usually used by investors trying to turn very specific forecasts to profit. We will look briefly at the following:

• straddles;
• strangles;

^22 Garman M. and Kohlhagen S., Foreign currency option values, Journal of International Money and Finance, No. 2, 1983, pp. 231–7.
^23 Our writings are based on Reilly F. K. and Brown K. C., Investment Analysis and Portfolio Management, South-Western, 2000.


• spreads;
• range forwards.

5.4.2.1 Straddles

A straddle consists of simultaneously purchasing (resp. selling) a call option and a put option with identical underlying equity, exercise price and maturity date. The concomitant purchase (resp. sale) corresponds to a long (resp. short) straddle. Clearly it is a question of playing volatility, as in essence it is contradictory to play the rise and the fall in the underlying equity price at the same time. We saw in Section 5.2.3 (The Greeks: vega) that the premium of an option increases along with volatility. As a result, the long straddle (resp. short straddle) is the action of an investor who believes that the underlying equity price will vary more (resp. less) than in the past, regardless of the direction of variation. It is particularly worth mentioning that with the short straddle, it is possible to make money with a zero variation in the underlying equity price. Finally, note that the straddle (Figure 5.12) is related to a particular type of option known as the chooser option.^24

5.4.2.2 Strangles

The strangle differs from the straddle only in the exercise price, which is not identical for the call option and the put option, both options being 'out of the money'. As a result:

• The premium is lower.
• The expected variation must be greater than that associated with the straddle.

This type of strategy therefore presents a less aggressive risk–return profile than the straddle. A comparison is shown in Figure 5.13.

5.4.2.3 Spreads

Option spreads consist of the concomitant purchase and sale of two contracts that are identical but for just one of their characteristics:

Figure 5.12 Long straddle and short straddle

^24 Reilly F. K. and Brown K. C. suggest reading Rubinstein M., Options for the Undecided, in From Black–Scholes to Black Holes, Risk Magazine, 1992.
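The profit profile of Figure 5.12 can be sketched as follows (the premiums of 4 and 3 are hypothetical figures): the long straddle loses the whole combined premium when S_T = K and gains on a large move in either direction; the short straddle is the mirror image.

```python
def straddle_profit(S_T, K, call_premium, put_premium):
    """Profit at maturity of a long straddle: buy one call and one put
    with the same strike K; both premiums are paid up front."""
    call_payoff = max(S_T - K, 0.0)
    put_payoff = max(K - S_T, 0.0)
    return call_payoff + put_payoff - (call_premium + put_premium)

K, c, p = 100.0, 4.0, 3.0  # hypothetical strike and premiums
profits = {S: straddle_profit(S, K, c, p) for S in (80, 90, 100, 110, 120)}
print(profits)  # worst case at S_T = K: the whole premium (7) is lost
```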


Figure 5.13 Long strangle compared with long straddle

Figure 5.14 Bull money spread

• The money spread consists of the simultaneous sale of an out-of-the-money call option and the purchase of an in-the-money call option on the same underlying. The term bull money spread (resp. bear money spread) describes a money spread combination that gains when the underlying equity price rises (resp. falls) (see Figure 5.14). The term butterfly money spread defines a combination of bear and bull money spreads, with hedging (limitation) of potential losses (and, obviously, reduced opportunities for profit).
• The calendar spread consists of the simultaneous sale and purchase of call or put options with identical exercise prices but different maturity dates.

Spreads are used when a contract appears to have an aberrant value in comparison with another contract.

5.4.2.4 Range forwards

For the record, range forwards consist of a combination of two optional positions. This combination is used for hedging, mainly for options on exchange rates.

Part III General Theory of VaR

Introduction
6 Theory of VaR
7 VaR estimation techniques
8 Setting up a VaR methodology


Introduction

As we saw in Part II, the sheer variety of products available on the markets, linear and otherwise, together with derivatives and underlying products, implies a priori a multifaceted understanding of risk, which is by nature difficult to harmonise. Ideally, therefore, we should identify a single risk indicator that estimates the loss likely to be suffered by the investor, together with the probability of that loss arising. This indicator is VaR.

There are three classic techniques for estimating VaR:

1. The estimated variance–covariance matrix method.
2. The Monte Carlo simulation method.
3. The historical simulation method.

An in-depth analysis of each of these methods will show their strong and weak points from both a theoretical and a practical viewpoint. We will then show, in detail, how VaR can be calculated using the historical simulation method; this is the subject of Chapter 8, as well as of a file on the accompanying CD-ROM entitled 'Ch 8', which contains the Excel spreadsheets relating to these calculations.

6 Theory of VaR

6.1 THE CONCEPT OF 'RISK PER SHARE'

6.1.1 Standard measurement of risk linked to financial products

The various methods for measuring the risks associated with an equity or a portfolio of equities were studied in Chapter 3. Two types of measurement can be defined: the intrinsic method and the relative method.

The intrinsic method uses the variance (or, equivalently, the standard deviation) of the return on the equity. In the case of a portfolio, we have to deal not only with variances but also with the correlations (or covariances) two by two. They are evaluated practically by their ergodic estimator, that is, on the basis of historical observations (see Section 3.1).

The relative method takes account of the risk associated with the equity or portfolio of equities on the basis of how it depends on market behaviour. The market is represented by a stock-exchange index (which may be a sector index). This dependence is measured using the beta of the equity or portfolio and gives rise to the CAPM type of valuation model (see Section 3.3).

The risk measurement methods for the other two products studied (bonds and options) fall into this second group. Among the risks associated with a bond or portfolio of bonds, those linked to interest-rate fluctuations can be modelled. In this way (see Section 4.1) we see the behaviour of the two components of the risk posed by selling the bond during its lifetime and reinvesting the coupons, according to the time that elapses between the issue of the security and its repayment. If we wish to summarise this behaviour in a simple index, we have to consider the duration of the bond; as this is only a first-order approximation, a second measurement, that of convexity (see Section 4.2), will refine the description given by the duration.
Finally, the value of an option depends on a number of variables: underlying equity price, exercise price, maturity, volatility, risk-free rate.1 The most important driver is of course the underlying equity price, and for this reason two parameters, one of the ﬁrst order (delta) and another of the second order (gamma), are associated with it. The way in which the option price depends on the other variables gives rise to other sensitivity parameters. These indicators are known as ‘the Greeks’ (see Section 5.2).

6.1.2 Problems with these approaches to risk

The ways of measuring the risks associated with these products, or with a portfolio of them, whatever contribution they may make to the management of these assets, have features that do not allow immediate generalisation.

^1 Possibly in two currencies if an exchange option is involved.


1. The representation of the risk associated with an equity through the variance in its returns (or through its square root, the standard deviation), or of the risk associated with an option through its volatility, takes account of both good and bad risks. A significant variance corresponds to the possibility of seeing returns vastly different from the expected return, i.e. very small values (small profits and even losses) as well as very large values (significant profits). This method does not present many inconveniences in portfolio theory (see Section 3.2), in which equities or portfolios with significant variances are volatile elements, little appreciated by investors who prefer 'certainty' of return with low risk of loss and low likelihood of significant profit. It is no less true to say that in the context of risk management, it is the downside risk that needs to be taken into consideration. Another parameter must therefore be used to measure this risk.

2. The approach to the risks associated with equities in Markowitz's theory limits the description of a distribution to two parameters: a measure of return and a measure of deviation. It is evident that an infinite number of probability laws correspond to any one expected return–variance pairing. We are, in fact, looking at skewed distributions: Figure 6.1 shows two distributions that have the same expectation and the same variance, but differ considerably in their skewness.

Figure 6.1 Skewness of distributions

In the same way, distributions with the same expectation, variance and skewness coefficient γ_1 may show different levels of kurtosis, as shown in Figure 6.2. The distributions with higher peaks towards the middle and with fatter tails than a normal distribution^2 (and therefore less significant for intermediate values) are described as leptokurtic and characterised by a positive kurtosis coefficient γ_2 (for the distributions in Figure 6.2, this coefficient totals −0.6 for the triangular and −1.2 for the rectangular).

Figure 6.2 Kurtosis of distributions

Remember that this expected return–variance approach is sometimes justified through utility theory. In fact, when the utility function is quadratic, the expected utility of the return on the portfolio is expressed solely from the expectation–variance pair (see Section 3.2.7).

3. In order to justify the mean–variance approach, the equity portfolio theory deliberately postulates that the return follows a normal probability law, which is characterised specifically by the two parameters in question; if µ and σ respectively indicate the mean and the standard deviation of a normal random variable, this variable will have the density:

f(x) = (1 / (σ·√(2π)))·exp[ −(1/2)·((x − µ)/σ)² ]

This is a symmetrical distribution, very important in probability theory and found everywhere in statistics because of the central limit theorem. The graph of this density is shown in Figure 6.3. A series of studies shows that normality of the return on equities is a hypothesis that can be accepted, at least as a first approximation, provided the period over which the return is calculated is not too short. It is admitted that weekly and monthly returns do not diverge too far from a normal law, but daily returns tend to diverge and follow a leptokurtic distribution instead.^3 If one wishes to take account of the skewness and the leptokurticity of the distribution of returns, one solution is to replace the normal distribution with a distribution that depends on more parameters, such as the Pearson distribution system,^4 and to estimate the parameters so that µ, σ², γ_1 and γ_2 correspond to the observations. Nevertheless, the choice of distribution remains wholly arbitrary.

^2 The definition of this law is given in Point (3) below.
Finally, for returns on securities other than equities, and for the other elements involved in risk management, the normality hypothesis is clearly lacking; a more general risk-measurement index therefore needs to be constructed.

Figure 6.3 Normal distribution

4. Another problem, by no means insignificant, is that concepts such as the duration and convexity of bonds, the variances of returns on equities, or the delta, gamma, rho or theta

^3 We will deal again with the effects of kurtosis on risk evaluation in Section 6.2.2.
^4 Johnson N. L. and Kotz S., Continuous Univariate Distributions, John Wiley & Sons, Ltd, 1970.


option parameters do not, despite their usefulness, actually 'say' very much as risk measurement indices. In fact, they do not state the kind of loss that one is likely to suffer, or the probability of it occurring. At the very most, the loss–probability pairing can be calculated on the basis of the variance in the case of a normal distribution (see Section 6.2.2).

5. In Section 6.1.1 we set out a number of classical risk analysis models associated with three types of financial products: bonds, equities and options. These are specific models adapted to specific products. In order to take account of less 'classical' assets (such as certain sophisticated derivatives), we would have to construct as many adapted models as necessary and take account in those models of exchange-rate risks, which cannot be avoided on international markets. Building this kind of structure is a mammoth task, and the complexity lies not only in building the various blocks that make up the structure but also in assembling these blocks into a coherent whole. A new technique, which combines the various aspects of market risk analysis into a unified whole, therefore needs to be elaborated.

6.1.3 Generalising the concept of 'risk'

The market risk is the risk with which the investor is confronted because of his lack of knowledge of future changes in basic market variables such as security prices, interest rates, exchange rates etc. These variables, also known as risk factors, determine the price of securities, conditional assets, portfolios etc. If the price of an asset is expressed as p and the risk factors that explain the price as X_1, X_2, ..., X_n, we have the wholly general relation

p = f(X_1, X_2, ..., X_n) + ε

in which the residue ε corresponds to the difference between reality (the effective price p) and the valuation model (the function f).
If the price valuation model is a linear model (as for equities), the risk factors combine, through the central limit theorem, to give a distribution of the variable p that is normal (at least in a rough approximation) and is therefore defined only by the two expectation–variance parameters. On the other hand, for some types of security, such as options, the valuation model ceases to be linear. The above logic is no longer applicable and its conclusions cease to be valid.

We would point out that alongside the risk factors just mentioned, the following can be added as factors in market risk:

• the imperfect nature of valuation models;
• imperfect knowledge of the rules and limits particular to the institution;
• the impossibility of anticipating regulatory and legislative changes.

Note

As well as market risk, investors are confronted with other types of risk that correspond to the occurrence of exceptional events such as wars, oil crises etc. This group of risks cannot of course be estimated using techniques designed for market risk. The techniques shown in this Part III do not therefore deal with these 'event-related' risks. This should not, however, prevent the wise risk manager from analysing his positions using value at risk theory, or from using 'catastrophe scenarios', in an effort to understand this type of exceptional risk.


6.2 VaR FOR A SINGLE ASSET

6.2.1 Value at Risk

In view of what has been set out in the previous paragraph, an index that allows estimation of the market risks facing an investor should:

• be independent of any distributional hypothesis;
• concern only downside risk, namely the risk of loss;
• measure the loss in question in a certain way;
• be valid for all types of assets and therefore either involve the various valuation models or be independent of these models.

Let us therefore consider an asset whose price^5 is expressed as p_t at moment t. The variation observed for the asset in the period [s; t] is expressed as Δp_{s,t} and is defined as Δp_{s,t} = p_t − p_s. Note that if Δp_{s,t} is positive, we have a profit; a negative value, conversely, indicates a loss.

The only hypothesis formulated is that the value of the asset evolves in a stationary manner: the random variable Δp_{s,t} has a probability law that depends on the interval over which it is calculated only through the duration (t − s) of that interval. The interval [s; t] can thus be replaced by the interval [0; t − s], and the variable Δp will now carry only the duration of the interval as its index. We therefore have the definitive definition: Δp_t = p_t − p_0.

The 'value at risk' of the asset in question, for the duration t and the probability level q, is defined as an amount termed VaR such that the variation Δp_t observed for the asset during the interval [0; t] will be less than VaR only with a probability of (1 − q):

Pr[Δp_t ≤ VaR] = 1 − q

or, similarly:

Pr[Δp_t > VaR] = q

By expressing as F_Δp and f_Δp respectively the distribution function and density function of the random variable Δp_t, we arrive at the definition of VaR illustrated in Figures 6.4 and 6.5.

Figure 6.4 Definition of VaR based on distribution function

Figure 6.5 Definition of VaR based on density function

^5 In this chapter, the theory is presented on the basis of the value, the price of assets, portfolios etc. The same developments can be made on the basis of the returns on these elements. The following two chapters will show how this second approach is the one that is adopted in practice.

It is evident that two parameters are involved in defining the concept of VaR: the duration t and the probability q. In practice, t is fixed once and for all (one day or one week, for example), and VaR is calculated as a function of q and expressed VaR_q if there is a risk of confusion. It is in fact possible to calculate VaR for several different values of q.

Example

If VaR at 98% equals −500 000, this means that there are 98 possibilities out of 100 that the loss on the asset in question will not exceed 500 000 over the period in question.

Note 1

As we will see in Chapter 7, some methods of estimating VaR are based on a distribution of value variations that does not have a density. For these random variables, as for discrete variables, the definition we have just given lacks precision. Thus, when 1 − q corresponds to a jump in the distribution function, no suitable value for the loss can be given, and the definition is adapted as shown in Figure 6.6. In the same way, when 1 − q corresponds to a plateau in the distribution function, an infinite number of values will be suitable; the least favourable of these values, that is the smallest, is chosen as a safety measure, as can be seen in Figure 6.7. In order to take account of this note, the strict definition of VaR takes the following form:

VaR_q = min { V : Pr[Δp_t ≤ V] ≥ 1 − q }

Figure 6.6 Case involving jump

Figure 6.7 Case involving plateau

Table 6.1 Probability distribution of loss

Δp     Pr
−5     0.05
−4     0.05
−3     0.05
−2     0.10
−1     0.15
 0     0.10
 1     0.20
 2     0.15
 3     0.10
 4     0.05
Example

Table 6.1 shows the probability law for the variation in value. For this distribution, we have VaR_0.90 = −4 and VaR_0.95 = −5.

Note 2

Clearly VaR is neither the loss that should be expected nor the maximum loss that is likely to be incurred; it is instead a level of loss that will be exceeded only with a probability fixed a priori. It is a parameter calculated on the basis of the probability law of the variable 'variation in value' and therefore includes all the parameters of that distribution. VaR is not therefore suitable for drawing up a classification of securities because, as we have seen for equities, the comparison of various assets is based on the simultaneous consideration of two parameters: the expected return (or loss) and a measure of dispersion of the said return.

Note 3

On the other hand, it is essential when using VaR to be fully aware of the duration on the basis of which this parameter is evaluated. The parameter, calculated for several different portfolios or departments within an institution, is only comparable if the reference period is the same. The same applies if VaR is being used as a comparison index for two or more institutions.
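The strict definition VaR_q = min{V : Pr[Δp_t ≤ V] ≥ 1 − q} can be applied directly to Table 6.1. The sketch below (function name ours) uses exact fractions so that the comparison with 1 − q is not disturbed by floating-point rounding.

```python
from fractions import Fraction

# Distribution of Table 6.1, stored as exact fractions
distribution = {
    -5: Fraction(5, 100), -4: Fraction(5, 100), -3: Fraction(5, 100),
    -2: Fraction(10, 100), -1: Fraction(15, 100), 0: Fraction(10, 100),
    1: Fraction(20, 100), 2: Fraction(15, 100), 3: Fraction(10, 100),
    4: Fraction(5, 100),
}

def var_q(dist, q):
    """VaR_q = min { V : Pr[dp_t <= V] >= 1 - q } for a discrete law."""
    level = 1 - Fraction(q)
    cumulative = Fraction(0)
    for value in sorted(dist):
        cumulative += dist[value]
        if cumulative >= level:
            return value
    raise ValueError("q out of range")

print(var_q(distribution, Fraction(90, 100)))  # -4
print(var_q(distribution, Fraction(95, 100)))  # -5
```

Note how the plateau/jump conventions of Note 1 are built into the min: the smallest suitable value is always returned.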


Note 4

Sometimes a different definition of VaR is found,^6 one that takes account not of the variation in value itself but of the difference between that variation and the expected variation. More specifically, this value at risk (for the duration t and the probability level q) is defined as the amount (generally negative) termed VaR*, such that the variation observed during the interval [0; t] will fall below the expected variation by more than |VaR*| only with a probability of (1 − q). Thus, if the expected variation is expressed as E(Δp_t), the definition is:

Pr[Δp_t − E(Δp_t) ≤ VaR*] = 1 − q

or, again: Pr[Δp_t > VaR* + E(Δp_t)] = q. It is evident that these two concepts are linked, as we evidently have VaR = VaR* + E(Δp_t).

6.2.2 Case of a normal distribution

In the specific case where the random variable Δp_t follows a normal law with mean E(Δp_t) and standard deviation σ(Δp_t), the definition can be rewritten:

Pr[ (Δp_t − E(Δp_t))/σ(Δp_t) ≤ (VaR_q − E(Δp_t))/σ(Δp_t) ] = 1 − q

This shows that the expression (VaR_q − E(Δp_t))/σ(Δp_t) is the quantile of the standard normal distribution ordinarily expressed as z_{1−q}. As z_{1−q} = −z_q, this allows VaR to be written in the very simple form

VaR_q = E(Δp_t) − z_q·σ(Δp_t)

according to the expectation and standard deviation of the loss. In the same way, the parameter VaR* is calculated simply, for a normal distribution, as VaR*_q = −z_q·σ(Δp_t). The values of z_q are found in the normal distribution tables.^7 A few examples of these values are given in Table 6.2.

Table 6.2 Normal distribution quantiles

      q        zq
    0.500    0.0000
    0.600    0.2533
    0.700    0.5244
    0.800    0.8416
    0.850    1.0364
    0.900    1.2816
    0.950    1.6449
    0.960    1.7507
    0.970    1.8808
    0.975    1.9600
    0.980    2.0537
    0.985    2.1701
    0.990    2.3263
    0.995    2.5758

6 Jorion P., Value at Risk, McGraw-Hill, 2001.
7 Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 118.
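The normal-law formula VaRq = E(Δpt) − zq · σ(Δpt) can be checked numerically. The following is a minimal sketch (the helper names are ours, not the book's), using the standard-library `statistics.NormalDist` to obtain the quantiles zq of Table 6.2; the figures 100 and 80 are purely illustrative.

```python
from statistics import NormalDist

def normal_var(mean: float, std: float, q: float) -> float:
    """VaR_q = E(dp_t) - z_q * sigma(dp_t), z_q being the standard normal quantile."""
    z_q = NormalDist().inv_cdf(q)   # e.g. 1.6449 for q = 0.95 (cf. Table 6.2)
    return mean - z_q * std

def normal_var_star(std: float, q: float) -> float:
    """VaR*_q = -z_q * sigma(dp_t): the variant measured around the expected variation."""
    return -NormalDist().inv_cdf(q) * std

print(round(normal_var(100, 80, 0.95), 1))   # -31.6
```

Note that `normal_var` differs from `normal_var_star` exactly by the mean, reflecting the relation VaR = VaR* + E(Δpt) of Note 4.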

Theory of VaR

Example
If a security gives an average profit of 100 over the reference period with a standard deviation of 80, we have E(Δpt) = 100 and σ(Δpt) = 80, which allows us to write:

VaR0.95 = 100 − (1.6449 × 80) = −31.6
VaR0.975 = 100 − (1.9600 × 80) = −56.8
VaR0.99 = 100 − (2.3263 × 80) = −86.1

The loss incurred by this security will therefore only exceed 31.6 (56.8 and 86.1 respectively) five times (2.5 times and once respectively) in 100.

Note
It was indicated in Section 6.1.2 that the normality hypothesis is far from valid in all circumstances. In particular, it has been shown that the daily returns on equities are better represented by a Pareto or Student distribution,8 that is, by leptokurtic distributions. Thus, for a Student distribution with ν degrees of freedom (where ν > 2), the variance is

σ² = 1 + 2/(ν − 2)

and the kurtosis coefficient (for ν > 4) is

γ2 = 6/(ν − 4)

This last quantity is always positive, which proves that the Student distribution is leptokurtic in nature. With regard to the number of degrees of freedom ν, Table 6.3 shows the coefficient γ2 and the quantiles zq for q = 0.95, q = 0.975 and q = 0.99 relative to these Student distributions,9 reduced beforehand (the variable is divided by its standard deviation) in order to allow a useful comparison between these figures and those obtained on the basis of the reduced normal law.

Table 6.3 Student distribution quantiles

      ν        γ2      z0.95    z0.975    z0.99
      5       6.00     2.601    3.319     4.344
     10       1.00     2.026    2.491     3.090
     15       0.55     1.883    2.289     2.795
     20       0.38     1.818    2.199     2.665
     25       0.29     1.781    2.148     2.591
     30       0.23     1.757    2.114     2.543
     40       0.17     1.728    2.074     2.486
     60       0.11     1.700    2.034     2.431
    120       0.05     1.672    1.997     2.378
  normal      0        1.645    1.960     2.326

8 Blattberg R. and Gonedes N., A comparison of stable and Student distributions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80.
9 Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 146.


This clearly shows that when the normal law is used in place of the Student laws, the VaR parameter is underestimated unless the number of degrees of freedom is high.

Example
With the same data as above, that is, E(Δpt) = 100 and σ(Δpt) = 80, and for 15 degrees of freedom, we find the following evaluations of VaR, instead of −31.6, −56.8 and −86.1 respectively:

VaR0.95 = 100 − (1.883 × 80) = −50.6
VaR0.975 = 100 − (2.289 × 80) = −83.1
VaR0.99 = 100 − (2.795 × 80) = −123.6
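The underestimation can be seen directly by pairing the quantiles of Table 6.2 with those of Table 6.3 for ν = 15. A short sketch (our own helper, illustrative only):

```python
# Quantiles taken from Tables 6.2 and 6.3; ν = 15 (reduced Student distribution).
Z_NORMAL = {0.95: 1.6449, 0.975: 1.9600, 0.99: 2.3263}
Z_STUDENT_15 = {0.95: 1.883, 0.975: 2.289, 0.99: 2.795}

def var_from_quantile(mean: float, std: float, z_q: float) -> float:
    """VaR_q = mean - z_q * std, for any quantile table."""
    return mean - z_q * std

for q in (0.95, 0.975, 0.99):
    var_n = var_from_quantile(100, 80, Z_NORMAL[q])
    var_t = var_from_quantile(100, 80, Z_STUDENT_15[q])
    # The Student VaR is always more negative: the normal law understates the loss.
    print(q, round(var_n, 1), round(var_t, 1))
```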

6.3 VaR FOR A PORTFOLIO

6.3.1 General results
Consider a portfolio consisting of N assets in respective quantities10 n1, . . . , nN. If the price of the jth security is termed pj, the price pP of the portfolio will of course be given by:

pP = Σj=1..N nj pj

The price variation will obey the same relation:

ΔpP = Σj=1..N nj Δpj

Once the distribution of the various Δpj elements is known, it is not easy to determine the distribution of ΔpP: the probability law of a sum of random variables will only be easy to determine if these variables are independent, and this is clearly not the case here. It is, however, possible to find the expectation and variance of ΔpP on the basis of the expectations, variances and covariances of the various Δpj elements:

E(ΔpP) = Σj=1..N nj E(Δpj)

var(ΔpP) = Σi=1..N Σj=1..N ni nj cov(Δpi, Δpj)

10 It can be shown that when prices are replaced by returns, the numbers nj of assets in the portfolio must be replaced by proportions Xj (positive numbers whose sum is 1), representing the respective stock-exchange capitalisation levels of the various securities (see Chapter 3).
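The two relations above can be sketched in a few lines of code. The quantities, expectations and covariances below are invented for illustration, not taken from the book:

```python
# E(dp_P) = sum_j n_j E(dp_j); var(dp_P) = sum_i sum_j n_i n_j cov(dp_i, dp_j).
def portfolio_moments(n, exp_dp, cov_dp):
    """Expectation and variance of the portfolio price variation dp_P."""
    N = len(n)
    mean = sum(n[j] * exp_dp[j] for j in range(N))
    var = sum(n[i] * n[j] * cov_dp[i][j] for i in range(N) for j in range(N))
    return mean, var

# Two assets held in quantities 2 and 3 (purely illustrative figures).
mean, var = portfolio_moments([2, 3], [1.0, 2.0],
                              [[4.0, 1.0],
                               [1.0, 9.0]])
print(mean, var)  # 8.0 109.0
```

With perfect correlation the standard deviations would simply add (here (2·2 + 3·3)² = 169); the imperfect correlation of 1/6 brings the variance down to 109, which is the diversification effect discussed next.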


where we have, when the two indices are equal: cov(Δpi, Δpi) = var(Δpi). The relation that gives var(ΔpP) is the one that justifies the principle of diversification in portfolio management: the imperfect correlations (coefficients strictly less than 1) between the assets make the portfolio risk lower than the sum of the individual risks.

36 Appendix 4 sets out the theoretical bases for this method in brief.
37 Gnedenko B. V., On the limit distribution for the maximum term in a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.
38 Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5. Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) value of meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58.

VaR Estimation Techniques

and βn (n = 1, 2, . . .) such that the limit (for n → ∞) of the random variable

Yn = [max(X1, . . . , Xn) − βn] / αn

is not degenerate, then this variable follows a probability law that depends on a real parameter τ and is defined by the distribution function:

FY(y) = 0                        if y ≤ 1/τ (when τ < 0)
FY(y) = exp[−(1 − τy)^(1/τ)]     if y > 1/τ (when τ < 0),
                                 for all real y (when τ = 0),
                                 if y < 1/τ (when τ > 0)
FY(y) = 1                        if y ≥ 1/τ (when τ > 0)

This holds independently of the common distribution of the Xi variables.39 The probability law involved is the generalised Pareto distribution. The numbers αn, βn and τ are interpreted respectively as a dispersion parameter, a location parameter and a tail parameter (see Figure 7.6). Thus, τ < 0 corresponds to Xi values with a fat-tailed distribution (decreasing more slowly than an exponential), τ = 0 to a thin-tailed distribution (exponential decrease) and τ > 0 to a zero-tail distribution (bounded support).

7.4.2.2 Estimation of parameters by regression
The methods40 that allow the αn, βn and τ parameters to be estimated by regression use the fact that this random variable Yn (or more precisely, its distribution) can in practice be estimated by sampling on a historical basis: N periods, each of duration n, will supply N values of the loss variable in question.
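The piecewise distribution function FY can be written out directly; the following sketch (our own code, not the book's) implements the three τ regimes, with the τ = 0 branch given by the limit e^(−y) noted in footnote 39:

```python
import math

# Limit distribution of normalised maxima, with tail parameter tau:
# tau < 0: fat tail; tau = 0: thin tail; tau > 0: bounded support.
def extreme_cdf(y: float, tau: float) -> float:
    if tau == 0.0:
        return math.exp(-math.exp(-y))          # limit of (1 - tau*y)**(1/tau)
    if tau < 0.0:
        if y <= 1.0 / tau:
            return 0.0                          # below the lower endpoint
        return math.exp(-((1.0 - tau * y) ** (1.0 / tau)))
    # tau > 0: the support is bounded above by 1/tau
    if y >= 1.0 / tau:
        return 1.0
    return math.exp(-((1.0 - tau * y) ** (1.0 / tau)))
```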


Figure 7.6 Distribution of extremes (density fY(y), annotated with the parameters αn, βn and τ)

39 When τ = 0, (1 − τy)^(1/τ) is interpreted as being equal to its limit e^(−y).
40 Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958. Longin F. M., Extreme value theory: presentation and first applications in finance, Journal de la Société Statistique de Paris, Vol. 136, 1995, pp. 77–97. Longin F. M., The asymptotic distribution of extreme stock market returns, Journal of Business, No. 69, 1996, pp. 383–408. Longin F. M., From value at risk to stress testing: the extreme value approach, Journal of Banking and Finance, No. 24, 2000, pp. 1097–130.


We express the successive observations of the variation-in-value variable as x1, x2, . . . , xNn, and the extreme value observed in the ith 'section' of observations as ŷi,n (i = 1, . . . , N):

ŷ1,n = max(x1, . . . , xn)
ŷ2,n = max(xn+1, . . . , x2n)
· · ·
ŷN,n = max(x(N−1)n+1, . . . , xNn)

Let us arrange these observations in order of increasing magnitude, expressing the values thus arranged as yi (i = 1, . . . , N):

y1 ≤ y2 ≤ . . . ≤ yN

It is then possible to demonstrate that, if the extremes observed are in fact a representative sample of the probability law given by the extreme value theorem, we have

FY((yi − βn)/αn) = i/(N + 1) + ui        i = 1, . . . , N

where the ui values correspond to a normal zero-expectation law. When this relation is transformed by taking the iterated logarithm (logarithm of logarithm) of the two sides, we obtain:

−ln[−ln(i/(N + 1))] + εi = −ln[−ln FY((yi − βn)/αn)]
                         = −ln[−ln exp(−(1 − τ(yi − βn)/αn)^(1/τ))]
                         = −(1/τ) ln[1 − τ(yi − βn)/αn]
                         = (1/τ){ln αn − ln[αn − τ(yi − βn)]} + εi

This relation constitutes a nonlinear regression equation in the three parameters αn, βn and τ. Note that when we are dealing with a distribution of extremes with a thin tail (τ parameter not significantly different from 0), we have FY(y) = exp[−exp(−y)], and another regression relationship has to be used:

−ln[−ln(i/(N + 1))] + εi = −ln[−ln FY((yi − βn)/αn)]
                         = −ln[−ln exp(−exp(−(yi − βn)/αn))] + εi
                         = (yi − βn)/αn + εi
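In the thin-tail case the relation is linear, so ordinary least squares suffices. The following sketch (our own helper names and synthetic data, not the book's) regresses the reduced variate wi = −ln[−ln(i/(N + 1))] on the ordered maxima yi; the fitted slope is 1/αn and the intercept −βn/αn:

```python
import math

def fit_gumbel(y_sorted):
    """Recover (alpha_n, beta_n) from ordered block maxima via the
    thin-tail regression w_i = (y_i - beta_n)/alpha_n + eps_i."""
    N = len(y_sorted)
    w = [-math.log(-math.log(i / (N + 1))) for i in range(1, N + 1)]
    my = sum(y_sorted) / N
    mw = sum(w) / N
    slope = (sum((yi - my) * (wi - mw) for yi, wi in zip(y_sorted, w))
             / sum((yi - my) ** 2 for yi in y_sorted))
    intercept = mw - slope * my
    alpha = 1.0 / slope
    beta = -intercept * alpha
    return alpha, beta

# Synthetic check: maxima placed exactly at the plotting positions with
# alpha = 2 and beta = 5 are recovered by the fit.
N = 50
w = [-math.log(-math.log(i / (N + 1))) for i in range(1, N + 1)]
y = [5.0 + 2.0 * wi for wi in w]
print(fit_gumbel(y))
```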


7.4.2.3 Estimating parameters using the semi-parametric method
As well as the regression technique for estimating the αn, βn and τ parameters, there are nonparametric methods,41 specifically indicated for estimating the tail parameter τ. They are, however, time consuming in terms of calculation, as an intermediate parameter has to be estimated using a Monte Carlo-type method. We show the main aspects here. The ith observation, after the observations are arranged in increasing order, is termed x(i): x(1) ≤ . . . ≤ x(n). The first stage consists of setting a limit M so that only the M highest observations in the sample (of size n) are of interest in shaping the tail distribution. It can be shown42 that an estimator (known as Hill's estimator) of the tail parameter is given by:

τ̂ = (1/M) Σk=1..M [ln x(n−k+1) − ln x(n−M)]
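Hill's estimator is a one-liner once the sample is sorted. A sketch (our own helper, with a toy sample chosen so the result can be checked by hand):

```python
import math

def hill_estimator(x, M):
    """Mean of ln x_(n-k+1) - ln x_(n-M) over the M largest observations."""
    xs = sorted(x)                        # x_(1) <= ... <= x_(n)
    n = len(xs)
    threshold = math.log(xs[n - M - 1])   # ln x_(n-M)
    return sum(math.log(xs[n - k]) - threshold for k in range(1, M + 1)) / M

# Toy sample: powers of two, so each log-difference is a multiple of ln 2.
print(hill_estimator([1, 2, 4, 8, 16, 32], 3))   # = 2 ln 2, about 1.386
```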

The choice of M is not easy to make, as the quality of Hill's estimator is quite sensitive to this threshold. If the threshold is fixed too low, the distribution tail will be too rich and the estimator will be biased downwards; if it is fixed too high, only a small number of observations will be available for making the estimation. The optimal choice of M can be made using a graphic method43 or the bootstrap method,44 which we will not develop here. An estimator proposed by Danielsson and De Vries for the limit distribution function is given by:

F̂Y(y) = 1 − (M/n) · (x(n−M)/y)^(1/τ̂)

This relation is valid for y ≥ x(n−M) only.

7.4.2.4 Calculation of VaR
Once the parameters have been estimated, the VaR parameter can then be determined. We explain the procedure to be followed when the tail model is estimated using the semi-parametric method,45 presenting the case of a single risk factor. Of course we will invert the process to some extent, given that it is the left extremity of the distribution that has to be used. The future value of the risk factor is estimated in exactly the same way as for the historical simulation:

X^(t)(1) = X(0) + Δ^(t) · X(0)        t = −T + 1, . . . , −1, 0

41 Beirlant J., Teugels J. L. and Vynckier P., Practical Analysis of Extreme Values, Leuven University Press, 1996.
42 Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of Statistics, Vol. 3, 1975, pp. 1163–73. Pickands J., Statistical inference using extreme order statistics, Annals of Statistics, Vol. 3, 1975, pp. 119–31.
43 McNeil A. J., Estimating the tails of loss severity distributions using extreme value theory, Mimeo, ETH Zentrum Zurich, 1996.
44 Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Mimeo, Iceland University and Tinbergen Institute Rotterdam, 1997. Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Journal of Empirical Finance, No. 4, 1997, pp. 241–57.
45 Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group Discussion Paper 273, London School of Economics, 1997. Embrechts P., Klüppelberg C. and Mikosch T., Modelling Extremal Events for Insurance and Finance, Springer Verlag, 1999. Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhäuser Verlag, 2001.


The choice of M is made using one of the methods mentioned above, and the tail parameter is estimated by:

τ̂ = (1/M) Σk=1..M [ln x(k) − ln x(M+1)]

The adjustment for the left tail of the distribution is made by:

F̂Y(y) = (M/n) · (x(M+1)/y)^(1/τ̂)

This relation is valid for y ≥ x(M+1) only. The distribution tail is simulated46 by taking a number of values at random from the re-evaluated distribution of the X^(t)(1) values and by replacing each x value lower than x(M+1) by the corresponding value obtained from the distribution of the extremes, that is, for the probability level p relative to x, by the solution x̂p of the equation

p = (M/n) · (x(M+1)/x̂p)^(1/τ̂)

In other words:

x̂p = x(M+1) · [M/(np)]^τ̂
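The quantile relation inverts the tail estimator exactly, which can be checked with a round trip. A sketch with invented numbers (the thresholds and parameters below are illustrative, not from the book):

```python
# Solving p = (M/n)*(x_threshold/x_p)**(1/tau) for x_p gives
# x_p = x_threshold*(M/(n*p))**tau.
def tail_quantile(x_threshold: float, M: int, n: int, p: float, tau: float) -> float:
    return x_threshold * (M / (n * p)) ** tau

def tail_prob(x_threshold: float, M: int, n: int, x: float, tau: float) -> float:
    """The fitted tail distribution F_hat evaluated at x."""
    return (M / n) * (x_threshold / x) ** (1.0 / tau)

xp = tail_quantile(2.0, 10, 1000, 0.005, 0.5)
print(tail_prob(2.0, 10, 1000, xp, 0.5))   # recovers approximately 0.005
```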

Note
Extreme value theory, which allows the adverse effects of one or more outliers to be avoided, has a serious shortcoming despite its impressive appearance: whatever the estimation method used, the historical period required has a very long duration. In fact:

• In the method based on nonlinear regression, Nn observations must be lodged, in which the duration n over which an extreme value is measured must be relatively long. The extreme value theorem is an asymptotic theorem, and the number N of durations must be large if one wishes to work with a sample distribution that is representative of the actual distribution of the extremes.
• In the semi-parametric method, a large number of observations are abandoned as soon as the estimation process starts.

7.5 ADVANTAGES AND DRAWBACKS
We now move on to review the various advantages and drawbacks of each VaR estimation technique. To make things simpler, we will use the abbreviations shown in Figure 7.2: VC for the estimated variance–covariance matrix method, MC for the Monte Carlo simulation and HS for the historical simulation.

46 This operation can also be carried out by generating a uniform random variable in the interval [0; 1], taking the reciprocal of the observed distribution of X^(t)(1) and replacing the observed values lower than x(M+1) by the value given by the extremes distribution.


7.5.1 The theoretical viewpoint
7.5.1.1 Hypotheses and limitations
(1) Let us first consider the presence or absence of a distributional hypothesis and its likely impact on each method. MC and HS do not formulate any distributional hypothesis. Only VC assumes that variations in price are distributed according to a normal law.47 Here, this hypothesis is essential:

• because of the technique used to split the assets into cashflows: only the multinormal distribution is such that a sum of its variables, even when correlated, is still distributed according to such a law;
• because the information supplied by RiskMetrics includes the −zq·σk values (k = 1, . . . , n) of the VaR* parameter for each risk factor, with zq = 1.645, that is, the normal distribution quantile for q = 0.95.

This hypothesis has serious consequences for certain assets such as options, whose returns are highly skewed, so that the method can no longer be applied. It is for this reason that RiskMetrics introduced a method based on the quantile concept for this type of asset, similar to MC and HS. For simpler assets such as equities, it has been demonstrated that the variations in price are distributed according to a leptokurtic law (more pointed than the normal close to the expectation, with thicker tails and less probable intermediate values). Under the normality hypothesis, the VaR value is underestimated for such leptokurtic distributions, because of the greater probability associated with the extreme values. This phenomenon has already been observed for the Student distributions (see Section 6.2.2). It can also be verified for specific cases.

Example
Consider the two distributions in Figure 6.2 (Section 6.1.2), in which the triangular, defined by

f1(x) = (√3 − |x|)/3        x ∈ [−√3; √3]

has thicker tails than the rectangular, for which

f2(x) = √6/6        x ∈ [−√6/2; √6/2]

Table 7.6 shows a comparison of the two distributions.
The phenomenon of underestimation of risk for leptokurtic distributions is shown by the fact that, for the high values of q used in practice:

VaRq(triangular) = √(6(1 − q)) − √3
VaRq(rectangular) = √6 · (1 − q) − √6/2 > VaRq(triangular)

47 In any case, it formulates a conditional normality hypothesis (normality with a variance that changes over time).
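The two closed-form VaR expressions can be checked numerically; the sketch below (our own code) confirms that at the confidence levels used in practice the thin-tailed rectangular distribution gives a less severe VaR than the fatter-tailed triangular one:

```python
import math

def var_triangular(q: float) -> float:
    """VaR_q = sqrt(6(1-q)) - sqrt(3) for the triangular density f1."""
    return math.sqrt(6.0 * (1.0 - q)) - math.sqrt(3.0)

def var_rectangular(q: float) -> float:
    """VaR_q = sqrt(6)*(1-q) - sqrt(6)/2 for the rectangular density f2."""
    return math.sqrt(6.0) * (1.0 - q) - math.sqrt(6.0) / 2.0

for q in (0.95, 0.975, 0.99):
    # Triangular VaR is more negative: the thin-tailed law understates the risk.
    print(q, round(var_triangular(q), 3), round(var_rectangular(q), 3))
```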

Table 7.6 Comparison of the two distributions

         Triangular   Rectangular
µ            0             0
σ²           0.5           0.5
γ1           0             0
γ2          −0.6          −1.2

In addition, numerical analyses carried out by F. M. Longin have clearly shown the underestimation of the VaR for leptokurtic distributions under the normality hypothesis. They have also shown that the underestimation increases as q moves closer to 1; in other words, precisely where extreme risks are of interest. Thus, for a market portfolio represented by an index, he calculated that:

VaR0.5(HS) = 1.6 VaR0.5(VC)
VaR0.75(HS) = 2.1 VaR0.75(VC)
VaR0.95(HS) = 3.5 VaR0.95(VC)
VaR0.99(HS) = 5.9 VaR0.99(VC)

This problem can be solved in VC by considering, in the evolution model for the return variable Rjt = µj + σt·εjt, a residual distributed not normally but in accordance with a generalised error distribution smoother than the normal law. In the MC and HS methods, the normality hypothesis is not formulated. In MC, however, if the portfolio is not re-evaluated by a new simulation, the hypothesis will be required, but only for this section of the method.

(2) The VC method, unlike MC and HS, relies explicitly on the hypothesis that asset prices are linear in the risk factors. This hypothesis forms the basis of the principle of splitting assets into cashflows. It is, however, flawed for certain groups of assets, such as options: a linear link between the option price and the underlying equity price assumes that the delta is the only non-zero sensitivity parameter. For this reason, RiskMetrics has abandoned the VC methodology for this type of product and deals with it by calling on a Taylor expansion. Another estimation technique, namely MC, is sometimes indicated for dealing with this group of assets.

(3) The hypothesis of stationarity can take two forms. In its more exacting form, it states that the joint (theoretical and unknown) distribution of the price variations of the different risk factors, over the VaR calculation horizon, is well estimated by the observations of the variations in these prices during the available historical period.
The hypothesis of stationarity takes this form for the HS method. However, if it is not verified because of the presence of a trend in the observed data, it is easy to take account of the trend when estimating the future value of the portfolio. A 'softer' form is sufficient for applying the VC method, as this method no longer relates to the complete distribution: the statistical parameters measured on the observed distribution of the price (or return) variations are good estimates of these same (unknown) parameters for the horizon over which the VaR is being estimated. The VC


method does, however, have the drawback of being unable to depart from this hypothesis if a trend is present in the data.

(4) In the presentation of the three estimation methods, it was assumed that the VaR calculation horizon was equal to the periodicity of the historical observations.48 The usual use of VaR involves making this period equal to one day for the management of dealing-room portfolios and 10 days under the prudential regulations,49 although a longer period can be chosen when measuring the risk associated with stable products such as investment funds. If, on the other hand, one wishes to consider a horizon (say one month) longer than the observation period (say one day), three methods may be applied:

• Estimating the VaR on the basis of monthly returns, even if the data are daily in nature. This leads to serious erosion of the accuracy of the initial observations.
• Using the formulae set out in the note in Section 7.1.2, which consist of multiplying the expected loss by the horizon (here, the number of working days in the month) and the loss standard deviation by the square root of the horizon. This is of course only valid under a hypothesis of independence of the daily variations, and for methodologies, such as VC, that calculate the VaR on the basis of these two parameters only (case of a normal distribution). As HS cannot rely on the normality hypothesis, this way of working is incorrect for it,50 and the previous technique should be applied.
• For MC, and for this method only, it is possible to generate not just a single future price value but a path of prices over the calculation horizon.

We now explain this last case a little further, for the example where the price evolution of an equity is represented by a geometric Brownian motion (see Section 3.4.2):

St+dt − St = St · (ER · dt + σR · dwt)

where the Wiener process (dwt) obeys a law with zero expectation and a variance equal to dt.
If one considers a normal random variable ε with zero expectation and unit variance, we can write:

St+dt − St = St · (ER · dt + σR · ε · √dt)

Simulation of a sequence of independent values of ε using the Monte Carlo method allows the variations St+dt − St to be obtained and therefore, starting from the last observed price S0, allows the path of the equity's future price to be generated for a number of dates equal to the number of ε values simulated.51

7.5.1.2 Models used
(1) The valuation models play an important part in the VC and MC methods. In the case of VC, they are even associated with a conditional normality hypothesis. For MC, the

48 The usual use of VaR involves making this period equal to one day. However, a longer period can be chosen when measuring the risk associated with stable products such as investment funds.
49 This 10-day horizon may, however, appear somewhat unrealistic given the speed and volume of the deals conducted in a dealing room.
50 As is pointed out quite justifiably by Hendricks D., Evaluation of value at risk models using historical data, FRBNY Policy Review, 1996, pp. 39–69.
51 The process that we have described for MC is also applicable, provided sufficient care is taken, to a one-day horizon, with this period broken down into a small number of subperiods.
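The path generation just described can be sketched in a few lines. The parameters below (initial price 100, drift 10 %, volatility 20 %, 21 daily steps) are assumed for illustration only:

```python
import math
import random

# Discretised geometric Brownian motion:
# S_{t+dt} - S_t = S_t * (E_R*dt + sigma_R * eps * sqrt(dt)), eps ~ N(0, 1).
def simulate_path(s0: float, er: float, sigma: float, dt: float, steps: int,
                  rng: random.Random) -> list:
    path = [s0]
    for _ in range(steps):
        eps = rng.gauss(0.0, 1.0)
        path.append(path[-1] * (1.0 + er * dt + sigma * eps * math.sqrt(dt)))
    return path

# One month of daily steps from a last observed price of 100.
rng = random.Random(42)
path = simulate_path(100.0, 0.10, 0.20, 1.0 / 252.0, 21, rng)
print(path[-1])
```

Repeating the simulation many times yields a distribution of month-end prices from which the VaR quantile can be read off.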


search for a model is an essential (and difﬁcult) part of the method; however, as there is a wide variety of models on offer, there is some guarantee as to the quality of results. Conversely, the HS method is almost completely independent of these models; at the most, it uses them as a pricing tool for putting together databases for asset prices. Here is one of the many advantages of this method, which have their source in the conceptual simplicity of the technique in question. To sum up, the risk associated with the quality of the models used is: • signiﬁcant and untreatable for VC; • signiﬁcant but manageable for MC; • virtually zero for HS. (2) As VC and HS are based on a hypothesis of stationarity, the MC method is the only one to make intensive use of asset price development models over time (dynamic models). These models can improve the results of this method, provided the models are properly adapted to the data and correctly estimated. 7.5.1.3 Data The data needed for supplying the VC methods in its RiskMetrics version are: • the partial VaRs for each of the elementary risks; • the correlation matrix for the various risk-factor couples. Thus, for n risk factors, n(n + 1)/2 different data are necessary. If for example one is considering 450 elementary risk factors, 101 475 different data must be determined daily. Note that the RiskMetrics system makes all these data available to the user. The MC method consumes considerably less data; in addition to the history of the various risk factors, a number of correlations (between risk factors that explain the same asset) are essential. However, if the portfolio is re-evaluated by a new simulation in order to avoid the normality hypothesis, the variance–covariance matrix for the assets in the portfolio will be essential. 
Finally, the HS method is the least data-consuming: as the historical records already contain the structure of the correlations between risk factors and between assets, this last information does not need to be obtained from an outside body or calculated on the basis of historical periods.

7.5.2 The practical viewpoint
7.5.2.1 Data
Most of the data used in the VC method cannot be directly determined by an institution applying a VaR methodology. Although the institution knows the composition of its portfolio and pays close attention to changes in the prices of the assets making up the portfolio, it cannot know the levels of volatility and the correlations of the basic risk factors, some of which can only be obtained by consulting numerous outside markets. The VC method can therefore only be effective if all these data are available in addition, which is the case if the RiskMetrics system is used. This will, however, place the business at a


disadvantage, as it is not the provider of the data that it uses. It will therefore not be possible to analyse these data critically, or indeed to correct them in case of error. Conversely, the MC method and especially the HS method use data from inside the business, or data that can easily be calculated on the basis of historical data, with all the flexibility that this implies with respect to their processing, conditioning, updating and control.

7.5.2.2 Calculations
Of course, the three methods proposed require a few basic financial calculations, such as the application of the principle of discounting. We now look at the way in which the three techniques differ from the point of view of the calculations to be made. The calculations required for the HS method are very limited and easily programmable on ordinary computer systems, as they are limited to arithmetical operations, sorting processes and the use of one or another valuation model when the price to be integrated into the historical record is determined. The VC method makes greater use of the valuation models, since the principle of splitting assets and mapping cashflows is based on this group of models and since options are dealt with directly using the Black and Scholes model. Most notably, these valuation models will include regressions, as equity values are expressed on the basis of national stock-exchange indices. In addition, a matrix calculation is made when the portfolio is re-evaluated on the basis of the variance–covariance matrix. In contrast to these techniques, which consume relatively little in terms of calculations (especially HS), the MC method requires considerable calculation power and time:

• valuation models (including regressions), taking account of changes over time and therefore estimations of stochastic process parameters;
• forecasting, on the basis of historical periods, of a number of correlations (between the risk factors that explain the same asset on the one hand, and between assets in the same portfolio for the purpose of its revaluation on the other);
• matrix algebra, including the Choleski decomposition method;
• finally and most significantly, a considerable number of simulations.
Thus, if M is the number of simulations required in the Monte Carlo method to obtain a representative distribution and the asset for which a price must be generated depends on n risk factors, a total of nM simulations will be necessary for the asset in question. If the portfolio is also revalued by simulation (with a bulky variance–covariance matrix), the number of calculations increases still further.

7.5.2.3 Installation and use
The basic principles of the VC method, with splitting of assets and mapping of cashflows, cannot be easily understood at all levels within the business that uses the methodology; and the function of risk management cannot be truly effective without positive assistance from all departments within the business. In addition, this method has a great advantage: RiskMetrics actually exists, and the great number of data that supply the system are


Table 7.7 Advantages and drawbacks

                          | VC                       | MC                          | HS
Distributional hypothesis | Conditional normality    | No                          | No
Linearity hypothesis      | Yes (Taylor if options)  | No                          | No
Stationarity hypothesis   | Yes                      | No                          | Method to be adapted if trend
Horizon                   | 1 observation period     | Paths (any duration)        | 1 observation period
Valuation models          | Yes (unmanageable risk)  | Yes (manageable risk)       | External
Dynamic models            | No                       | Yes                         | No
Required data             | Partial VaR*;            | Histories (+ var.–cov.      | Histories
                          | correlation matrix       | of assets)                  |
Source of data            | External                 | In house                    | In house
Sensitivity               | Average                  | Average                     | Outliers
Calculation               | Valuation models;        | Valuation models;           | External valuation
                          | matrix calculation       | statistical estimates;      | models only
                          |                          | matrix calculation;         |
                          |                          | simulations                 |
Set-up                    | Easy                     | Difficult                   | Easy
Understanding             | Difficult                | Average                     | Easy
Flexibility               | Low                      | Low                         | Good
Robustness                | Too many hypotheses      | Good                        | Good

also available. The drawback, of course, is the lack of transparency caused by the external origin of the data. Although the basic ideas of the MC method are simple and natural, putting them into practice is much more problematic, mainly because of the sheer volume of calculation. The HS method relies on theoretical bases as simple and natural as those of the MC method. In addition, the system is easy to implement and its principles can be easily understood at all levels within a business, which will be able to adopt it without problems. It is also a very flexible methodology: unlike the other methods, which are made clumsy by their vast number of calculations, aggregation can be carried out at many different levels and in many different contexts (an investment fund, a portfolio, a dealing room, an entire institution). Finally, the small number of basic hypotheses and the almost complete absence of complex valuation models make the HS method particularly reliable in comparison with MC and especially VC. Let us end by recalling one drawback of the HS method, inherent in the simplicity of its design: its great sensitivity to the quality of the data. In fact, one or a few outliers (whether exceptional in nature or caused by an error) will greatly influence the VaR value over a long period (equal to the duration of the historical records). It has been said that extreme value theory can overcome this problem, but unfortunately the huge number of calculations required to apply it is prohibitive. Instead, we would recommend that institutions using the HS method set


up a very rigorous data control system and systematically analyse any exceptional observations (that is, outliers); this is possible in view of the internal nature of the data used here.

7.5.3 Synthesis
We end by setting out a synoptic table,52 shown as Table 7.7, of all the arguments put forward.

52 With regard to the horizon for the VC method, note that the VaR can be obtained for a horizon H longer than the periodicity of the observations by multiplying the VaR for one period by √H, except in the case of optional products.

8 Setting Up a VaR Methodology
The aim of this chapter is to demonstrate how the VaR can be calculated using the historical simulation method. So that the reader can work through the examples in detail, we felt it helpful to include with this book a CD-ROM of Excel spreadsheets. The file, called 'CH8.XLS', contains all the information relating to the examples dealt with below. No part of the sheets making up the file has been hidden, so the calculation procedures are totally transparent. The examples presented have been deliberately simplified; the actual portfolios of banks, institutions and companies will be much more complex than what the reader can see here. The great variety of financial products, and the number of currencies available the world over, have compelled us to make certain choices. In the final analysis, however, the aim is to explain the basic methodology so that the user can transpose historical simulation into the reality of his business. Being aware of the size of some companies' portfolios, we point out a number of errors to be avoided in terms of simplification.

8.1 PUTTING TOGETHER THE DATABASE
8.1.1 Which data should be chosen?
Relevant data are fundamental. As VaR deals with extreme values in a series of returns, a database error, which is implicitly extreme, will exert its influence for many days. The person responsible for putting together the data should make a point of testing the consistency of the new values added to the database every day, so that it is not corrupted. The reliability of data depends upon:

• the source (internal or external);
• where applicable, the sturdiness of the model and of the hypotheses that allow the data to be determined;
• awareness of the market;
• human intervention in the data integration process.

Where the source is external, market operators will be good reference points for specialist data sources (exchange, long term, short term, derivatives etc.). Sources may be printed (financial newspapers and magazines) or electronic (Reuters, Bloomberg, Telerate, Datastream etc.). Prices may be chosen 'live' (what is the FRA 3–6 USD worth on the market?) or calculated indirectly (calculation of a forward-forward rate on the basis of the three- and six-month Libor USD, for example). The ultimate aim is to provide proof of consistency as time goes on. On a public holiday, the last known price will be used as the price for the day.


8.1.2 The data in the example

We have limited ourselves to four currencies (EUR, PLN, USD and GBP), observed weekly. For each of these currencies, 101 dates (from 19 January 2001 to 20 December 2002) have been selected. For these dates, we have put together a database containing the following prices:

• 1, 2, 3, 6, 12, 18, 24, 36, 48 and 60 months deposit and swap rates for EUR and PLN, and the same periods but only up to 24 months for USD and GBP;
• spot rates for three currency pairs (EUR/GBP, EUR/PLN and EUR/USD).

The database contains 3737 items of data.
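The figure of 3737 can be checked directly: 10 tenors for each of EUR and PLN, 7 tenors (up to 24 months) for each of USD and GBP, plus 3 spot pairs, gives 37 series of 101 observations each. A quick sketch (plain Python, variable names ours):

```python
# Tenors quoted in the database (in months)
tenors_eur_pln = [1, 2, 3, 6, 12, 18, 24, 36, 48, 60]  # EUR and PLN: out to 60 months
tenors_usd_gbp = [1, 2, 3, 6, 12, 18, 24]              # USD and GBP: only up to 24 months

n_series = 2 * len(tenors_eur_pln) + 2 * len(tenors_usd_gbp) + 3  # + 3 spot pairs
n_dates = 101                                                     # 19 Jan 2001 - 20 Dec 2002
n_items = n_series * n_dates
```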

8.2 CALCULATIONS

8.2.1 Treasury portfolio case

The methodology assumes that historical returns will be applied to a current portfolio in order to estimate the maximum loss that will occur, with a certain degree of confidence, through successive valuations of that portfolio. The first stage, which is independent of the composition of the portfolio, consists of determining the past returns (in this case, weekly returns).

8.2.1.1 Determining historical returns

As a reminder (see Section 3.1.1), the formula that allows the return to be calculated¹ is:

Rt = (Ct − Ct−1) / Ct−1

For example, the weekly return for the three-month USD deposit between 19 January 2001 and 26 January 2001 is:

(0.05500 − 0.05530) / 0.05530 = −0.5425 %

The results of applying the rates of return to the databases are found on the 'Returns' sheet within CH8.XLS. For 101 rates, 100 weekly returns can be determined.

8.2.1.2 Composition of portfolio

The treasury portfolio is located on the 'Portfolios' sheet within CH8.XLS. This sheet is entirely fictitious, and has no basis in economic reality, either for the dates covered by the sample or, even less, at the time you read these lines. The only reality is the prices and rates that prevail at the dates chosen (and de facto the historical returns). The investor's currency is the euro. The term 'long' (or 'short') indicates:

• in terms of deposits, that the investor has borrowed (lent).

¹ We could equally have used the other expression for the return, that is, ln(Ct/Ct−1) (see also Section 3.1.1).


• in terms of foreign exchange, that the investor has purchased (sold) the first currency (EUR in the case of EUR/USD) in exchange for the second currency in the pair.

We have assumed that the treasury portfolio for which the VaR is to be calculated contains only new positions for the date on which the maximum loss is being estimated: 20 December 2002. In a real portfolio, an existing contract must of course be revalued in relation to the period remaining to maturity. Thus, a nine-month deposit that has been running for six months (remaining period therefore three months) will require a database that contains the prices and historical returns for three-month deposits in order to estimate the maximum loss for the currency in question. In addition, some interpolations may need to be made on the curve, as the price for some broken periods (such as a seven-month deposit that has been running for four months and 17 days) does not exist in the market. Therefore, for each product in the treasury portfolio, we have assumed that the contract prices obtained by the investor correspond exactly to those in the database on the date of valuation. The values in column 'J' ('Initial Price') in the 'Portfolios' sheet in CH8.XLS for the treasury portfolio will thus correspond to the prices in the 'Rates' sheet in CH8.XLS for 20 December 2002 for the products and currencies in question.

8.2.1.3 Revaluation by asset type

We have said that historical simulation consists of revaluing the current portfolio by applying past returns to that portfolio; the VaR is not classified and determined until later. Account should, however, be taken of the nature of the product when applying the historical returns. Here, we are envisaging two types of product:

• interest-rate products;
• FX products.

A. Interest-rate product: deposit

Introduction: calculating the VBP

We saw in Section 2.1.2 that the value of a basis point (VBP) allows the sensitivity of an interest-rate position to a one-basis-point movement, upwards or downwards, to be calculated. Position 1 of the treasury portfolio (CH8.XLS, 'Portfolios' sheet, line 14) is a three-month GBP deposit (the investor is 'long') for a total of GBP50 000 000 at a rate of 3.9400 %. The investor's interest here is in the three-month GBP rate increasing; he will then be able to reinvest his position at a more favourable rate. Otherwise, the position will make a loss. More generally, however, it is better to pay attention to the sensitivity of one's particular position. The first stage consists of calculating the interest on the maturity date:

I = C · R · (ND / DIV)


Here: I represents the interest; C the nominal; R the interest rate; ND the number of days in the period; DIV the number of days in a year for the currency in question. In the example:

I = 50 000 000 × 0.0394 × 90/365 = 485 753.42

Let us now assume that rates increase by one basis point. The interest cashflow at the maturity date is then calculated on an interest rate of 0.0394 + 0.0001 = 0.0395. We therefore obtain:

I = 50 000 000 × 0.0395 × 90/365 = 486 986.30

As the investor in the example is 'long', that is, it is better for him to lend in order to cover his position, he will gain:

ΔI = 50 000 000 × 0.0001 × 90/365 = |485 753.42 − 486 986.30| = 1 232.88 GBP

every time the three-month GBP rate increases by one basis point.

Historical return case

The VBP assumes a predetermined variation of one basis point each time, either upwards or downwards. In the example, this variation equals a profit (rise) or loss (fall) of 1 232.88 GBP. In the same way, we can apply to the current rate of the position any other variation that the investor considers to be of interest: we stated in Section 2.1.2 that this is the case for simulations (realistic or catastrophic). However, if the investor believes that the best forecast² of future variations in rates is a variation that he has already seen in the past, all he needs to do is apply a series of past variations to the current rate (on the basis of past returns) and derive a law of probability from the results. On 19 January 2001, the three-month GBP rate was worth 5.72 %, while on 26 January 2001 it stood at 5.55 %. The historical return is −2.9720 % ('Returns' sheet, cell AG4). This means that:

0.0572 × (1 + (−0.02972)) = 0.0555

If we apply this past return to the current rate of the position ('Portfolios' sheet, cell J14), we will have:

0.0394 × (1 + (−0.02972)) = 0.038229

This rate would produce interest of:

I = 50 000 000 × 0.038229 × 90/365 = 471 316.70
As the investor is 'long', this drop in the three-month rate would produce a loss in relation to that rate, totalling:

471 316.70 − 485 753.42 = −14 436.73 GBP

The result is shown on the 'Treasury Reval' sheet, cell D3.

Error to avoid

Some people may be tempted to proceed on the basis of the difference between past rates:

0.055500 − 0.057200 = −0.0017

and then to add that difference to the current rate:

² The argument in favour of this assumption is that the variation has already occurred. The argument against, however, is that it cannot be assumed that it will recur in the future.


0.0394 − 0.0017 = 0.0377

This would lead to a loss of:

I = 50 000 000 × (−0.0017) × 90/365 = −20 958.90

This is obviously different from the true result of −14 436.73; the method is blatantly false. To stress the concept once again: if, a year ago, rates had moved from 10 % to 5 % within one week (a return of −50 %), then with a current position valued at a rate of 2 % we would have:

0.02 × (1 − 0.50) = 0.01 with the right method;
0.02 − (0.10 − 0.05) = −0.03 with the wrong method.

In other words, one must stick to the relative variations in interest rates and FX rates, not to the absolute variations.

B. FX product: spot

Position 3 in the treasury portfolio (CH8.XLS, 'Portfolios' sheet, line 16) is a purchase (the investor is 'long') of EUR/USD for a total of EUR75 000 000 at a price of USD1.0267 per EUR.

Introduction: calculating the value of a 'pip'

A 'pip' equals one-hundredth of a USD cent in a EUR/USD quotation, that is, the fourth figure after the decimal point. The investor is 'long' as he has purchased euros and paid for the purchase in USD. A rise (fall) in the EUR/USD will therefore be favourable (unfavourable) for him. In the same way as the VBP for rate products, the sensitivity of a spot exchange position can be valued by calculating the effect of a one-pip variation, upwards or downwards, on the result of the position. The calculations are simple:

75 000 000 × 1.0267 = 77 002 500
75 000 000 × 1.0268 = 77 010 000
77 010 000 − 77 002 500 = 7 500

Example of historical returns

On 19 January 2001, the spot EUR/USD was worth 0.9336, while on 26 January 2001 it stood at 0.9238. The historical return ('Returns' sheet, cell AO4) is −1.0497 %. This means that:

0.9336 × (1 + (−0.010497)) = 0.9238

Applying this past return to the current rate of Position 3 of the treasury portfolio, we have:

1.0267 × (1 + (−0.010497)) = 1.01592273
The investor's position is 'long', so a fall in the EUR/USD rate will be unfavourable for him, and the loss (in USD) will be:

75 000 000 × (1.0267 × (1 + (−0.010497)) − 1.0267) = 75 000 000 × (1.01592273 − 1.0267) = −808 295.31

This result is displayed in cell F3 of the 'Treasury Reval' sheet.

8.2.1.4 Revaluation of the portfolio

The revaluation of the treasury portfolio is shown in the table produced by cells B2 to G102 on the 'Treasury Reval' sheet.
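The pip sensitivity and the historical revaluation of the spot position above can be sketched as follows. This is only an illustration (the function names are ours); it reproduces the pip value of 7 500 USD and the loss of about −808 295 USD:

```python
def pip_value(nominal_eur, spot, pip=0.0001):
    """USD effect of a one-pip move in EUR/USD on a long EUR position."""
    return nominal_eur * (spot + pip) - nominal_eur * spot

def spot_reval(nominal_eur, current_spot, hist_return):
    """P/L in USD of a long EUR/USD spot position when a past relative
    return is applied to the current rate."""
    return nominal_eur * (current_spot * (1 + hist_return) - current_spot)

# Historical return of the spot EUR/USD between 19 and 26 January 2001
hr = (0.9238 - 0.9336) / 0.9336            # about -1.0497 %

sensitivity = pip_value(75_000_000, 1.0267)    # about 7 500 USD per pip
loss_usd = spot_reval(75_000_000, 1.0267, hr)  # about -808 295 USD
```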


For each of the positions, from 1 to 3, we have applied 100 historical returns (from 26 January 2001 to 20 December 2002) in the currency in question (GBP, USD, EUR). The total shown is the loss (negative total) or profit (positive total) as calculated above, taking account of past returns. Let us take as an example the first revaluation (corresponding to the 26 January 2001 return) for Position 1 of the portfolio (cell D3 in the 'Treasury Reval' sheet). The formula that allows the loss or profit to be calculated is the difference between the interest receivable at the current (initial) price of the position and the interest receivable once the corresponding historical return of 26 January 2001 has been applied to that initial price. We therefore have the general formula:

L = C · R · (1 + HR) · (ND/DIV) − C · R · (ND/DIV)
  = C · [R · (1 + HR) · (ND/DIV) − R · (ND/DIV)]

Here: L is the loss or profit; C is the total to which the transaction relates; R is the current rate (initial price) of the transaction; HR is the historical return. It is this last formula that is found in cells D3 to F102. Of course we could have simplified³ it here:

L = C · [R · (1 + HR) · (ND/DIV) − R · (ND/DIV)]
  = C · R · (ND/DIV) · ((1 + HR) − 1)
  = C · R · (ND/DIV) · HR

If the investor is 'long', he has borrowed and will wish to cover himself by replacing his money at a higher rate than the initial price. Therefore, if HR is negative (positive), L is negative (positive) and he has realised a loss (profit). This is the case for Position 1 of the portfolio on 26 January 2001. On the other hand, if the investor is 'short', he has lent and will wish to cover himself by borrowing the money at a lower rate than the initial price. Therefore, if HR is negative (positive), L must be positive (negative) and the preceding formula (valid if the investor is 'long') must be multiplied by −1. For Position 2 of the portfolio, the investor is 'short' and we have (cell E3 of the 'Treasury Reval' sheet):

L = (−1) · C · [R · (1 + HR) · (ND/DIV) − R · (ND/DIV)]

³ We have not simplified it, so that the various components of the difference can be seen more clearly.
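The general revaluation formula and its simplified form, with the sign convention for 'long' and 'short' positions, can be sketched as follows (function names are ours; the figures reproduce Position 1):

```python
def deposit_reval(C, R, nd, div, HR, long_position=True):
    """P/L when a past relative return HR is applied to the rate of a
    deposit position: C*(R*(1+HR)*ND/DIV - R*ND/DIV), with the sign
    flipped for a 'short' position."""
    L = C * (R * (1 + HR) * nd / div - R * nd / div)
    return L if long_position else -L

def deposit_reval_simplified(C, R, nd, div, HR, long_position=True):
    """Equivalent simplified form: C*R*(ND/DIV)*HR."""
    L = C * R * (nd / div) * HR
    return L if long_position else -L

# Position 1: three-month GBP deposit, 50 million at 3.94 %,
# historical return of the three-month rate of about -2.9720 %
hr = (0.0555 - 0.0572) / 0.0572
loss = deposit_reval(50_000_000, 0.0394, 90, 365, hr)  # about -14 436.73 GBP
```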


On each past date, we have a loss or profit expressed in the currency of the operation for each position. As the investor has the euro as his national or accounting currency, we have converted the three losses or gains into EUR and summed them at each date. The FX rate chosen for the euro against the other currencies is of course the rate prevailing on the date of calculation of the VaR, that is, 20 December 2002. The overall loss is shown in column G of the 'Treasury Reval' sheet.

8.2.1.5 Classifying the treasury portfolio values and determining the VaR

When all the revaluations have been carried out, we have (see 'Treasury Reval' sheet) a series of 100 losses or profits according to historical return date. One has to classify them in ascending order, that is, from the greatest loss to the smallest. The reader will find column G of the 'Treasury Reval' sheet classified in ascending order on the 'Treasury VaR' sheet, in column B. To the right of this column, 1 − q appears.

A. Numerical interpretation

We think it important to state once again that when 1 − q corresponds to a plateau of the loss distribution function, we have chosen to define VaR as the left extremity of the said plateau (see Figure 6.7). We therefore say that:

• There are 66 chances out of 100 that the actual loss will be −EUR360 822 or less (1 − q = 0.34), or VaR0.66 = −360 822.
• There are 90 chances out of 100 that the actual loss will be −EUR1 213 431 or less (1 − q = 0.10), or VaR0.90 = −1 213 431.
• There are 99 chances out of 100 that the actual loss will be −EUR2 798 022 or less (1 − q = 0.01), or VaR0.99 = −2 798 022.

B. Representation in graphical form

If the forecast losses are shown on the x-axis and 1 − q on the y-axis, the estimated loss distribution is obtained. Figure 8.1 also appears on the 'Treasury VaR' sheet.

Figure 8.1 Estimated loss distribution of treasury portfolio
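The classification step can be sketched as follows, assuming our reading of the book's left-extremity convention: with n observations, the VaR at confidence q is the k-th worst outcome, k = (1 − q) · n. The toy data below is synthetic, not the portfolio's:

```python
def historical_var(pnl, q):
    """Historical-simulation VaR: sort the P/L figures in ascending order
    (greatest loss first) and take the left extremity of the plateau at
    probability 1 - q, i.e. the k-th worst value with k = (1 - q) * n
    (assumed convention)."""
    ordered = sorted(pnl)
    n = len(ordered)
    k = max(int(round((1 - q) * n)), 1)
    return ordered[k - 1]

# Toy series of 100 P/L figures: -50, -49, ..., 49
pnl = list(range(-50, 50))
```

For example, `historical_var(pnl, 0.90)` picks the 10th worst of the 100 figures.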


8.2.2 Bond portfolio case

The first stage once again consists of determining the past returns (in this case, weekly).

8.2.2.1 Past variations to be applied

The main difficulty connected with this type of asset, in terms of determining VaR, is the question of whether or not historical prices or rates are available. When a bond is first issued, for example, it has to be acknowledged that we do not have any historical prices. As the aim of this chapter is merely to show how VaR can be calculated using the historical simulation method, with deliberately simplified examples, we have used a range of deposit and swap rates on the basis of which we construct our example. We did not therefore wish to use historical bond prices as a basis.

A. Yield

The price of a bond is known on at least one date: that for which we propose to determine the VaR (in our example, 20 December 2002). Using this price, and taking into account the calculation date, the maturity date, the coupon date, the price on maturity, the basis of calculation and the frequency of the coupon payments, the 'yield to maturity' (YTM) can be calculated as shown in Section 4.1.2. Cells H3 to H9 of the 'Portfolios' sheet show the respective yields of the bonds in our fictitious portfolio. As not all versions of Excel contain the 'Yield' financial function, we have copied the values into cells I3 to I9. It is to this yield to maturity that we intend to apply the variations in the deposit and/or swap rates whose maturity corresponds to the remaining period of the bond in question. We are of course aware that this method is open to criticism: the price, had we used it, reflects not only general interest-rate levels but also a dimension of credit risk and lack of liquidity.

B. Interpolation of rates

We cannot deduce directly from the 'Rates' sheet the returns to be applied to the yield to maturity; the remaining periods are in fact broken.
We have determined (in the 'Bonds Interp' sheet) the two maturities (columns I and J of that sheet) that straddle the remaining period, together with the portion of the rate differential to be added to the lower rate (column F divided by column H). Readers will find in the 'Variation Bonds' sheet the values of the rates to be interpolated (taken from the 'Rates' sheet) and the rate differential to which the interpolation rule mentioned above is applied. For bond 1 in our portfolio, this calculation is found in column G. All that remains is to determine the returns on the series of synthetic rates in exactly the same way as shown in Section 8.2.1. The returns applicable to the yield to maturity of bond 1 in the portfolio are thus shown in column H of the 'Variation Bonds' sheet.

8.2.2.2 Composition of portfolio

The 'bond' portfolio is found on the 'Portfolios' sheet in CH8.XLS. This sheet, like the rest of the portfolio, is purely fictitious. The investor's national currency is the euro.
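The interpolation rule described in B above can be sketched as follows; the maturities and rates below are made-up figures for illustration, not values from the database:

```python
def interpolate_rate(m_low, r_low, m_high, r_high, m_remaining):
    """Linear interpolation between two quoted maturities for a broken
    period: add to the lower rate the corresponding portion of the rate
    differential (our formulation of the 'Bonds Interp' rule)."""
    weight = (m_remaining - m_low) / (m_high - m_low)
    return r_low + weight * (r_high - r_low)

# Illustrative: a residual life of 15 months between the 12-month (4.0 %)
# and 18-month (4.3 %) quotes
r_15m = interpolate_rate(12, 0.040, 18, 0.043, 15)
```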


The portfolio is 'long', with six bonds, for which the following are given:

• currency;
• coupon;
• maturity date;
• ISIN code;
• last known price (this is the 'bid', because if the position is closed, one should expect to deal at the bid price);
• yield to maturity (in formula form in column H, as a copied value in column I);
• basis of calculation (actual/actual or 30/360);
• frequency of payment of the coupon.

8.2.2.3 Portfolio revaluation

The table in cells B2–H9 of the 'Losses Bonds' sheet summarises the portfolio data that we need in order to revalue it. Remember that we propose to apply the relative variations in rates (column L for bond 1, column Q for bond 2 etc.) to the yield to maturity (column C) of each bond that corresponds, in terms of maturity, to the period still outstanding. A new yield to maturity is therefore deduced (column M for bond 1); it is simply the current rate to which a past variation has been applied. We explained above that, starting from the last known price of a bond and taking account of the calculation date, the maturity date, the coupon date, the price on maturity, the basis of calculation and the frequency of the coupon, we deduce the yield to maturity. Conversely, it is possible to start from our 'historical' yields to maturity in order to reconstruct a synthesised price (column N). The 'Price' function in Excel returns a price on the basis of a given yield to maturity (column M) and, of course, the calculation date, maturity date, coupon date, price on maturity, basis of calculation and coupon frequency. As not all versions of Excel contain the 'Price' function, we have copied the values from column N into column O for bond 1, from column S into column T for bond 2, etc. All that now remains is to compare the new price with the last known price, and to multiply this differential by the nominal held in the portfolio in order to deduce the resulting profit or loss (column P for bond 1).
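The repricing step can be sketched as follows. This is only an illustration: we use a plain bullet-bond formula with annual coupons and no broken periods as a stand-in for Excel's 'Price' function, and the bond figures are made up:

```python
def bond_price(ytm, coupon, n_years, redemption=100.0):
    """Price per 100 nominal of a bullet bond with annual coupons: a plain
    stand-in for Excel's 'Price' function (no broken periods)."""
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, n_years + 1))
    return pv_coupons + redemption / (1 + ytm) ** n_years

def bond_pl(last_price, ytm, coupon, n_years, hist_return, nominal):
    """Apply a past relative variation to the yield to maturity,
    reconstruct a synthesised price and compare it with the last price."""
    new_price = bond_price(ytm * (1 + hist_return), coupon, n_years)
    return nominal * (new_price - last_price) / 100.0

# Illustrative: a 5-year 5 % bond at par (YTM 5 %), yield shocked up by 2 %
pl = bond_pl(100.0, 0.05, 5.0, 5, 0.02, 100_000_000)
```

A positive relative variation in the yield lowers the synthesised price, so a long position shows a loss.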
As indicated in cell B11, we assume that we are holding a nominal of EUR100 million on each of the six bond lines.

Note

It may initially seem surprising that the nominal used for bond 1 (expressed in PLN) is also EUR100 million. In fact, rather than expressing the nominal in PLN, calculating the loss or profit and then dividing the total by the EUR/PLN rate (3.9908 at 20 December 2002), we have directly expressed the loss for a nominal denominated in euros. It is then sufficient (column AP) to sum, on each line, the six losses and/or profits for each of the 100 dates (thereby respecting the correlation structure).

8.2.2.4 Classifying bond portfolio values and determining VaR

Once all the new valuations have been made, a series of 100 losses or profits ('Losses Bonds' sheet) is obtained according to historical return date. One has to classify them

in ascending order, that is, from the greatest loss to the smallest. Readers will find column AP of the 'Losses Bonds' sheet classified in ascending order on the 'Bonds VaR' sheet, in column B. 1 − q is located to the right of that column.

A. Numerical interpretation

We say that:

• There are 66 chances out of 100 that the actual loss will be −EUR917 or less (1 − q = 0.34), or VaR0.66 = −917.
• There are 90 chances out of 100 that the actual loss will be −EUR426 740 or less (1 − q = 0.10), or VaR0.90 = −426 740.
• There are 99 chances out of 100 that the actual loss will be −EUR1 523 685 or less (1 − q = 0.01), or VaR0.99 = −1 523 685.

B. Representation in graphical form

If the loss estimates are shown on the x-axis and 1 − q on the y-axis, the estimated loss distribution is obtained. Figure 8.2 also appears on the 'Bonds VaR' sheet.

Figure 8.2 Estimated loss distribution of bond portfolio

8.3 THE NORMALITY HYPOTHESIS

We have stressed the hidden dangers of underestimating the risk when the hypothesis of normality is adopted. In fact, because of the leptokurtic nature of market observations, the tails of the normal law (and VaR is interested specifically in extreme values) reproduce the observed historical frequencies poorly, as they are too flat. When using theoretical distributions to simplify the calculations, it is prudent to overstate market risks; here, however, the opposite occurs. In order to explain the problem better, we have compared the observed distribution for the bond portfolio in CH8.XLS with the theoretical normal distribution. The comparison is found on the 'Calc N' sheet (N = normal) and teaches us an interesting lesson with regard to the tails of these distributions.


We have used the estimated loss distribution of the bond portfolio (copied from the 'Bonds VaR' sheet). We have produced 26 classes (from −1 600 000 to −1 465 000, from −1 465 000 to −1 330 000, and so on up to 1 775 000 to 1 910 000) into which each of the 100 losses is placed. For example, the loss of −1 523 685.01 (cell D4) belongs to the first class (from −1 600 000 to −1 465 000, column G). The table G2–AF103 on the 'Calc N' sheet contains one class per column (lines G2–AF3) and 100 lines, one per loss (cells D4–D103). At the intersection of a given loss and a class there is a 0 (if the loss does not fall within the class in question) or a 1 (if it does). By totalling the 1s in a column, we obtain the number of losses per class, that is, the frequency. Thus, a loss of between −1 600 000 and −1 465 000 has a frequency of 1 % (cell G104) and a loss of between 425 000 and 560 000 has a frequency of 13 % (cell V104). Cells AH2–AJ29 carry the class centres (−1 532 500 for the class −1 600 000 to −1 465 000) and the frequencies as a figure and as a percentage. If we plot AH2 to AI29 in bar-chart form, we obtain the observed distribution for the bond portfolio (Figure 8.3), located in AL2 to AQ19. Now the normal distribution has to be calculated. We have calculated the mean and standard deviation of the estimated loss distribution in D104 and D105 respectively, and carried the losses to AS4 to AS103. Next, we have applied the normal density function (already set out in Section 3.4.2, 'Continuous model'), that is,

f(x) = (1 / (√(2π) σ)) exp(−½ ((x − µ) / σ)²)

to each loss in the bond portfolio (AT4 to AT103). If we plot this data on a graph, we obtain (Figure 8.4) the graph located from AV2 to BB19. In order to compare these distributions (observed and theoretical), we have superimposed them; the calculations that allow this superimposition are located in the 'Graph N' sheet.
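The binning and the normal density of the 'Calc N' sheet can be sketched as follows (function names are ours):

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density f(x) = exp(-0.5*((x-mu)/sigma)**2) / (sqrt(2*pi)*sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def class_frequencies(losses, lower, width, n_classes):
    """Observed frequency (in %) per class, mirroring the 0/1
    intersection table of the 'Calc N' sheet."""
    counts = [0] * n_classes
    for x in losses:
        k = int((x - lower) // width)
        if 0 <= k < n_classes:
            counts[k] += 1
    return [100.0 * c / len(losses) for c in counts]

# The book's 26 classes of width 135 000, from -1 600 000 to 1 910 000
lower, width, n_classes = -1_600_000, 135_000, 26
centres = [lower + width * (k + 0.5) for k in range(n_classes)]
```

The first class centre is −1 532 500, as on the sheet.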
As can be seen from Figures 8.3 and 8.4, the coordinates are proportional (a factor of 135 000, the class interval width). We have summarised the following in a table (B2 to D31 of the 'Graph N' sheet):


Figure 8.3 Observed distribution


Figure 8.4 Normal distribution


Figure 8.5 Normal and observed distributions

• the class centres;
• the observed frequencies relating to them;
• the normal coordinates relative to each class centre.

It is therefore possible (Figure 8.5) to construct a graph, located in E2 to N32, which results from the superimposition of the two distributions. We observe an underestimation of the tail frequencies by the normal law, which further confirms the leptokurtic nature of the financial markets.
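The leptokurtosis argument can be illustrated numerically: a fat-tailed sample has a kurtosis well above the normal value of 3, so a normal density fitted to its mean and variance puts too little weight in the tails. A small sketch (the Gaussian mixture below is our own, purely illustrative, device for generating fat tails):

```python
import random

def kurtosis(xs):
    """Sample kurtosis m4 / m2**2 (equal to 3 for the normal law;
    greater than 3 for a leptokurtic distribution)."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2

random.seed(1)
# Crude fat-tailed returns: usual small moves plus occasional large shocks
fat_tailed = [random.gauss(0, 1) if random.random() < 0.95 else random.gauss(0, 5)
              for _ in range(50_000)]
gaussian = [random.gauss(0, 1) for _ in range(50_000)]
```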

Part IV From Risk Management to Asset Management

Introduction
9 Portfolio Risk Management
10 Optimising the Global Portfolio via VaR
11 Institutional Management: APT Applied to Investment Funds


Introduction Although risk management methods have been used ﬁrst and foremost to quantify market risks relative to market transactions, these techniques tend to be generalised especially if one wishes to gain a comprehensive understanding of the risks inherent in the management of institutional portfolios (investment funds, hedge funds, pension funds) and private portfolios (private banking and other wealth management methods). In this convergence between asset management on the one hand and risk management on the other, towards what we term the discipline of ‘asset and risk management’, we are arriving, especially in the ﬁeld of individual client portfolio management, at ‘portfolio risk management’, which is the subject of Chapter 9. Next, we will look at methods for optimising asset portfolios that verify normal law hypotheses, which is especially the case with equities.1 In particular, we will be adapting two known portfolio optimisation methods: • Sharpe’s simple index method (see Section 3.2.4) and the EGP method (see Section 3.2.6). • VaR (see Chapter 6); we will be seeing the extent to which VaR improves the optimisation. To close this fourth part, we will see how the APT model described in Section 3.3.2 allows investment funds to be analysed in behavioural terms.

Figure P1 Asset and risk management: the diagram contrasts asset management (fund and portfolio management: asset allocation and market timing, stock picking, currency allocation) and risk management (stop loss, credit equivalent, VBP, VaR, MRO), converging in asset and risk management (portfolio risk management, fund risk management)

¹ In fact, the statistical distribution of an equity is leptokurtic but becomes normal over a sufficiently long period.

9 Portfolio Risk Management¹

9.1 GENERAL PRINCIPLES

This involves applying:

• To portfolios managed traditionally, that is, using:
— asset allocation with a greater or lesser risk profile (including, implicitly, market timing);
— a choice of specific securities within the category of equities or options (stock picking);
— currency allocation.
• To particularly high-risk portfolios (said to have a 'high leverage effect') falling clearly outside the scope of traditional management (the most frequent case), a fivefold risk management method that allows:
— daily monitoring by the client (and intraday monitoring if market conditions require) of the market risks to which he or she is exposed given the composition of his or her portfolio;
— monitoring of equal regularity by the banker (or wealth manager where applicable) of the client positions for which he or she is by nature the only person responsible.

Paradoxically (at least initially), it is this second point that is essential for the client: the ability to monitor credit risk with modern, online tools allows the banker to minimise the client's need to provide collateral, something that earns the client little or nothing.

9.2 PORTFOLIO RISK MANAGEMENT METHOD

Let us take the case of particularly high-risk, that is, highly leveraged, portfolios, including derivatives:

• linear portfolios (such as FRA, IRS, currency swaps and other forward FX);
• nonlinear portfolios (options).

In order to minimise the need for collateral under this type of portfolio wherever possible, the pledging agreement may include clauses that provide for a risk-monitoring framework, which supposes rights and obligations on the part of the contractual parties:

• The banker (wealth manager) reports on the market risks (interest rates, FX, prices etc.), thus helping the client to manage the portfolio.

¹ Lopez T., Delimiting portfolio risk, Banque Magazine, No. 605, July–August 1999, pp. 44–6.


• The client undertakes to respect the risk criteria (by complying with the limits) set out in the clauses, authorising the bank (under certain conditions) to act in his name and on his behalf if the limits in question are breached.

A portfolio risk management mandate generally consists of two parts:

• the investment strategy;
• the risk framework.

9.2.1 Investment strategy

This part sets out:

• the portfolio management strategy;
• the responsibilities of each of the parties;
• the maximum maturity dates of the transactions;
• the nature of the transactions.

9.2.2 Risk framework

In order to determine the risks and limits associated with the portfolio, the following four limits will be taken into consideration, none of which may be exceeded:

1. the stop loss limit for the portfolio;
2. the maximum credit equivalent limit;
3. the upper VBP (value of one basis point) limit for the portfolio;
4. the upper VaR (Value at Risk) limit for the portfolio.

For each measure, one should be in a position to calculate:

• the limit;
• the outstanding amount to be compared with the limit.

9.2.2.1 The portfolio stop loss

With regard to the limit, the potential global loss on the portfolio (defined below) can never exceed x % of the cash equivalent of the portfolio, the latter being defined as the sum of:

• the available cash balances, on the one hand;
• the realisation value of the assets included in the portfolio, on the other.

The percentage of the cash equivalent of the portfolio, termed the stop loss, is determined jointly by the bank and the client, depending on the client's degree of risk aversion, itself based on the degree of leverage within the portfolio. For the outstanding amount, the total potential loss on the portfolio is the sum of the differences between:


• the value of its constituent assets at the initiation of each transaction; • the value of those same assets on the valuation date; Each of these must be less than zero for them to apply. Example Imagine a portfolio of EUR100 invested in ﬁve equities ABC at EUR10 per share and ﬁve equities XYZ at EUR5 per share at 1 January. If the value of ABC changes to EUR11 and that of XYZ to EUR4 on the next day, the potential decrease in value on XYZ (loss of EUR1 on 10 equities in XYZ) will be taken into account for determining the potential overall loss on the portfolio. The EUR5 increase in value on the ABC equities (gain of EUR1 on ﬁve equities ABC) will, however, be excluded. The overall loss will therefore be −EUR10. The cash equivalent of the portfolio will total EUR95, that is, the total arising from the sale of all the assets in the portfolio. This produces a stop loss equal to 20 % of the portfolio cash equivalent (20 % of EUR95 or 19). See Table 9.1. 9.2.2.2 Maximum credit equivalent limit The credit limit totals the cash equivalent of the portfolio (deﬁned in the ‘portfolio stop loss’ section). The credit liabilities, which consist of the sum of the credit equivalents deﬁned below, must be equal to or less than the cash equivalent of the portfolio. The credit equivalent calculation consists of producing an equivalent value weighting to base products or their derivatives; these may or may not be linear. The weighting will be a function of the intrinsic risk relative to each product (Figure 9.1) and will therefore depend on whether or not the product: • involves exchange of principal (for example, a spot FX involves an exchange of principal whereas a forward FX deal will defer this to a later date); • involves a contingent obligation (if options are issued); • involves a contingent right (if options are purchased); Table 9.1

Table 9.1 Stop loss

  Stop loss   Potential loss   Use of limit
  EUR19       −EUR10           52.63 %
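The stop-loss arithmetic of the example and of Table 9.1 can be sketched as follows (a minimal Python sketch; the 20 % stop loss is the parameter agreed in the example, and the helper names are ours, not the bank's):

```python
# Potential global loss: sum of per-asset value changes since initiation,
# counting only the negative differences (gains are excluded).

def potential_global_loss(positions):
    # positions: list of (quantity, price_at_initiation, price_today)
    return sum(q * (now - init)
               for q, init, now in positions
               if q * (now - init) < 0)

def cash_equivalent(positions, cash=0.0):
    # realisation value of the assets plus available cash balances
    return cash + sum(q * now for q, _, now in positions)

# Portfolio from the example: 5 ABC bought at 10 (now 11), 10 XYZ bought at 5 (now 4)
portfolio = [(5, 10.0, 11.0), (10, 5.0, 4.0)]

loss = potential_global_loss(portfolio)   # -10.0: only the XYZ decrease counts
ce = cash_equivalent(portfolio)           # 95.0
stop_loss = 0.20 * ce                     # 19.0 with the agreed 20 % stop loss
use_of_limit = abs(loss) / stop_loss      # about 52.63 %, as in Table 9.1
```

The limit check is then simply `abs(loss) <= stop_loss`.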

Figure 9.1 Weight of the credit equivalent (credit risk decreasing from + to −: spot, option issues, option purchases, forward FX, then FRA, IRS and currency swaps)


• is priced (where no exchange of principal takes place) as a function of one variable (the interest rate for FRAs, IRSs and currency swaps) or of two variables (interest rates and spot in the case of forward FX).

Credit usage per product could, for example, be determined as follows:

1. For spot cash payments, 100 % of the nominal of the principal currency.
2. For the sale of options, the notional of the underlying principal currency, multiplied by the forward delta.
3. For the purchase of options, 100 % of the premium paid.
4. For other products, each position opened in the portfolio would be the subject of a daily economic revaluation (mark-to-market). The total potential loss arising would be taken (gains being excluded) and multiplied by a weighting factor (taking account of the volatility of the asset value) equal to 100 % + x % + y % for forward FX and 100 % + x % for FRAs, IRSs and currency swaps, x and y always being strictly positive amounts.

Example
Here is a portfolio consisting of five assets (Tables 9.2 and 9.3). The revaluation prices are shown in Table 9.4.

Table 9.2 FX products

  Product            P/S   Currency   Nom.   P/S   Currency   Nom.           Spot   Forward
  Spot               S     EUR        5 m    P     USD        5.5 million    1.1    –
  Six-month future   P     USD        10 m   S     JPY        1170 million   120    117

Table 9.3 FX derivatives and FRA

  Product                        P/S   Currency   Nominal         Price/premium
  Three-month call, strike 1.1   P     EUR/USD    EUR11 million   EUR220 000
  Two-month put, strike 195.5    S     GBP/JPY    £5 million      GBP122 000
  FRA 3–6                        S     DKK        100 million     3.3 %

Table 9.4 Revaluation price

  Product      Historical price   Current price     Loss (currency)   Potential loss (EUR)
  Spot         1.1                1.12              −100 000          −89 285.71
  FX forward   117                114.5             −25 million       −189 969.60
  Long call    2.00 % nom. EUR    2.10 % nom. EUR   +11 000           +11 000
  Short put    2.44 % nom. GBP    2.48 % nom. GBP   −2000             −3034.90
  FRA          3.3 %              3.4 %             −25 000           −3363.38
  Total                                                               −274 653.59
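The credit-usage rules listed above can be sketched as follows (a Python sketch; the function, its argument names and the illustrative x = y = 5 % add-ons are our assumptions, not the bank's actual parameters):

```python
# Sketch of the credit-equivalent weighting rules: the weight depends on
# whether the product exchanges principal, carries a contingent obligation
# or right, and on how many market variables drive its price.

def credit_equivalent(product, nominal=0.0, premium=0.0,
                      potential_loss=0.0, forward_delta=0.0,
                      x=0.05, y=0.05):
    if product == "spot":                 # exchange of principal
        return nominal
    if product == "option_sale":          # contingent obligation
        return nominal * forward_delta
    if product == "option_purchase":      # contingent right
        return premium
    if product == "forward_fx":           # two variables: rates and spot
        return abs(potential_loss) * (1 + x + y)
    # FRA, IRS, currency swaps: one variable (the interest rate)
    return abs(potential_loss) * (1 + x)

# e.g. the FX forward of Table 9.4, potential loss EUR189 969.60,
# weighted at 100 % + x % + y % = 110 % with the assumed add-ons:
ce_fwd = credit_equivalent("forward_fx", potential_loss=-189_969.60)
```

With x + y = 10 %, this reproduces the "110 % of potential loss" weight used for FX forwards below.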


Table 9.5 Credit equivalent agreements

  Product               Credit equivalent
  Spot                  100 % of nominal of principal currency
  FX forward            110 % of potential loss
  FX options purchase   100 % of premium paid
  FX options sale       Notional of principal currency × forward delta
  FRA                   (100 % + x %) of potential loss

Variables are excluded step by step on the basis of the probability of their parameters being statistically zero (Pr > χ2). The exclusion of variables is conditioned in all cases by the degree of adjustment of the model: the rate of concordance between the model and the observed reality must be maximised. The SAS output will be 'association of predicted probabilities and observed responses – concordant: 97.9 %'. In the following example (Table 12.8), the variable Mc10y has a probability of 76.59 % of being statistically zero. Excluding it would nevertheless lead to a deterioration in the rate of concordance between the observations (repricing–non-repricing) and the forecasts of the model (repricing–non-repricing); this variable must therefore remain in the model.

There are other criteria for measuring the performance of a logistic regression, such as the logarithm of likelihood: the closer the log-likelihood is to zero, the better the adjustment of the model to the observed reality (−2 log L in the SAS output). The log-likelihood can also be summarised by the McFadden R2: R2 = 1 − (−2 log L, intercept and covariates)/(−2 log L, intercept only).
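The McFadden measure can be computed directly from the −2 log L values of the SAS output. A minimal sketch (the numerical values below are illustrative, not taken from the text):

```python
# McFadden pseudo-R2 from the -2 log L values reported by PROC LOGISTIC:
# the closer to 1, the better the adjustment of the model.

def mcfadden_r2(neg2logl_intercept_only, neg2logl_full):
    return 1.0 - neg2logl_full / neg2logl_intercept_only

r2 = mcfadden_r2(neg2logl_intercept_only=200.0, neg2logl_full=60.0)   # 0.7
```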


Table 12.8 Logistic regression

  Variable   DF   Parameter estimate   Standard error   Wald chi-square   Proba over chi-square   Odds ratio12
  Constant   1    35.468               12.1283          8.5522            0.0035                  –
  Time       1    −0.2669              0.2500           1.1404            0.2856                  0.766
  M3m        1    0.3231               0.1549           4.3512            0.0370                  1.381
  Ma3m       1    −5.9101              3.4407           2.9504            0.0859                  0.003
  Ma10y      1    0.9997               0.7190           1.9333            0.1644                  2.718
  Mc3m       1    0.0335               0.0709           0.2236            0.6363                  1.034
  Mac3m      1    −0.0731              0.0447           2.6772            0.1018                  0.929
  Mac5y      1    0.1029               0.1041           0.9766            0.323                   1.108
  Mc10y      1    0.0227               0.0762           0.0887            0.7659                  1.023
  Mac10y     1    −0.1146              0.102            1.2618            0.2613                  0.892

Association of predicted probabilities and observed responses: Concordant = 97.9 %, Discordant = 2.1 %, Tied = 0 % (1692 pairs)
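As a sketch, the fitted model of Table 12.8 can be evaluated as follows (the coefficients are copied from the table; the function names, dictionary layout and any input values are ours, and the observed spreads would be fed in at run time):

```python
import math

# Logistic model of Table 12.8: probability of repricing as a function of
# the static and dynamic spreads (parameter estimates from the table).

coef = {"constant": 35.468, "time": -0.2669, "m3m": 0.3231, "ma3m": -5.9101,
        "ma10y": 0.9997, "mc3m": 0.0335, "mac3m": -0.0731, "mac5y": 0.1029,
        "mc10y": 0.0227, "mac10y": -0.1146}

def repricing_probability(x):
    # x maps variable name -> observed value; the constant has no regressor
    z = coef["constant"] + sum(coef[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))      # logistic link

def odds_ratio(name):
    # odds ratio of footnote 12: the exponential of the estimated parameter
    return math.exp(coef[name])
```

For instance, a one-unit rise in M3m multiplies the odds of repricing by `odds_ratio("m3m")`, about 1.381 as in the table.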

In the model, the probability of a change in rate increases with:

• time;
• the fall in the static spread A at 3 months;
• the rise in the static spread B at 3 months;
• the fall in the static spread A at 10 years;
• the slowing of the rise in the dynamic spreads A 3 months, B 5 years and A 10 years;
• the rise in the dynamic margins B 3 months and B 10 years.

Displaying the model
For each model, the linear combination on the historical data must be programmed. This allows the critical value of the model needed for separating the repricing periods from the periods of equilibrium to be determined. As the separation is not 100 %, there is no objective value: the critical value chosen conditions the statistical errors of the first and second kind. In the example, the value 1.11 captures almost all the repricing without the model anticipating the actual repricing periods too much (see model CD-ROM and critical value).

The method presented was applied to all the floating-rate products of a bank, every two months, for nine years at most over the period 1991 to 1999, depending on the historical data available and the creation date of the products. The results are encouraging, as the rates of concordance between the models and the observed reality are, with just a few exceptions, all over 90 %. The classic method, based on the choice of dynamic and static spreads through simple statistical correlation, has also been tested. It shows results very far removed from those obtained using the method proposed here, the rate of concordance of pairs being less than 80 %.

12 The odds ratio is equal to the exponential of the estimated parameter: e^b. A one-unit variation in the variable (here, time and the spreads) multiplies the odds of 'repricing' by e^b, that is, alters them by e^b − 1.

Techniques for Measuring Structural Risks in Balance Sheets


12.4.3.3 Use of the models in rate risk management
This behavioural study allows the arbitrary rate-change conventions to be replaced to good advantage. Remember that the conventions in the interest-rate gaps often take the form of a simple average of the periods during which rates are not changed. Working on the hypothesis that the bank's behaviour is stable, we can use each model prospectively by calculating the static and dynamic spreads on the basis of sliding forward rates, for example over one year. This floating-rate integration method gives two cases:

• The rate change occurs between today's date and one year from now. In this case, the contract revision date will be precisely that date.
• The rate change is not probable over a one-year horizon. In this case, the revision date may be put back to the most distant prospective date (in our example, one year).

Naturally, using an interest-rate gap presupposes in the first instance that the rate-change dates are known for each contract, but also that the magnitude of the change can be anticipated in order to assess the change in the interest margin. Our method satisfies the first condition but does not directly give us the magnitude of the change. In fact, between two repricing periods we see a large number of situations of equilibrium. In practice, the ALM manager can put this free space to good use to optimise the magnitude of the change and profit from a long or short balance-sheet position. This optimisation process is made easier by the model: a change with too low a magnitude will necessitate a further change, while a change with too high a magnitude may be incompatible with the historical values of the model (see the statistics for magnitude of changes). Modelling the repricing improves knowledge of the rate risk and optimises both the simulations on interest margin forecasts and the knowledge of the market risk through VaR.
12.4.3.4 Remarks and criticisms
Our behavioural approach does, however, have a few weak points. The model specifies the revision dates without indicating the size of the change in basis points; it is not a margin optimisation model. Another criticism relates to the homogeneity of the period studied: a major change in one or more of the parameters set out previously could disrupt or invalidate the estimated model. Finally, this empirical method cannot be applied to new floating-rate products.

Despite these limitations, the behavioural approach to static and dynamic spreads, based on the analysis of canonical correlations, gives good results and is sufficiently flexible to explain changes in rates on very different products. In fact, our bank's balance sheet contains both liability and asset products, each with its own specific client segmentation. The behavioural method allows complex parameters to be integrated, such as the business policy of the banks, the sensitivity of volume adjustments to market interest rates, and the competitive environment.

12.5 REPLICATING PORTFOLIOS
In asset and liability management, a measurement of the monthly VaR for all the assets as a whole is information of the first importance on the market risk (interest rate and exchange rate). It is a measurement that allows the economic forecasts associated with the risk to be assessed.


ALM software packages most frequently use J. P. Morgan's interest-rate and exchange-rate variance–covariance matrix, as the information on duration necessary for the calculation is already available. It is well known that products without a maturity date are a real stumbling block for this type of VaR and for ALM, and there is relatively little academic work on attributing maturity dates to demand credit or debit products.

The aim of 'replicating portfolios' is to attribute a maturity date to balance-sheet products that do not have one. These portfolios combine all the statistical or conventional techniques that allow the position of a product without a maturity date to be converted into an interwoven whole of contracts that are homogeneous in terms of liquidity and duration. Replicating portfolios can be constructed in different ways. If the technical environment allows, it is possible to construct them contract by contract, defining development profiles and therefore implicit maturity dates for 'stable' contracts. Where necessary, on the basis of volumes per type of product, the optimal value method may be used. Other banks adopt definitions of replicating portfolios that are too arbitrary.

12.5.1 Presentation of replicating portfolios
Many products have no certain maturity date, including, among others:

• American options (off balance sheet), which can be exercised at any time;
• demand advances and overdrafts on assets;
• current liability accounts.

The banks construct replicating portfolios in order to deal with this problem. This kind of portfolio uses statistical techniques or conventions. The assigned aim of all the methods is to transform an accounting balance of demand products into a number of contracts with differing characteristics (maturity, origin, depreciation profile, internal transfer rate etc.).
At the time of the analysis, the accounting balance of the whole contract portfolio is equal to the accounting balance of the demand product. Figures 12.2–12.4 offer a better understanding of replicating portfolio construction. The replicating portfolio presented consists of three different contracts that explain the accounting balances at t−1, t0 and t1. The aim of the replicating portfolio is to represent the structure of the flows that make up the accounting balance.

(Bar chart, in thousands of millions: balances of 60 at t−1, 90 at t0 and 80 at t1.)

Figure 12.2 Accounting balances on current accounts

(Bar charts, in thousands of millions, showing the balance profiles of Contract 1, Contract 2 and Contract 3 over the periods t−1 to t3.)

Figure 12.3 Contracts making up the replicating portfolio

(Contracts 1–3 stacked, totalling 60 at t−1, 90 at t0 and 80 at t1.)

Figure 12.4 Replicating portfolio constructed on the basis of the three contracts
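The identity illustrated by Figures 12.2–12.4 can be sketched as follows (only the totals of 60, 90 and 80 are given by the figures; the individual contract profiles below are illustrative guesses):

```python
# Replicating-portfolio identity: at each date the sum of the contract
# balances must equal the accounting balance of the demand product.

contracts = {                      # balance of each contract at t-1, t0, t1
    "contract 1": [40, 40, 40],    # hypothetical profiles: only the column
    "contract 2": [20, 30, 40],    # totals are taken from Figure 12.4
    "contract 3": [0, 20, 0],
}

balances = [sum(profile[t] for profile in contracts.values())
            for t in range(3)]
# balances == [60, 90, 80], the accounting balances of Figure 12.2
```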

12.5.2 Replicating portfolios constructed according to convention
To present the various methods, we take the example of current accounts. There are two types of convention for constructing a replicating portfolio.

The first type can be described as simplistic; it is used especially for demand deposits with an apparently stable monthly balance. On the basis of this observation, some banks construct the replicating portfolio by applying linear depreciation to the accounting balance at moment t over several months. As the depreciation is linear over several months or even several years, the banking institutions consider that the structure of the flows making up the accounting balance is stable overall in the short term. In fact, only 1/12 of the balance is depreciated at the end of one month (1/6 by the second month, etc.) in a replicating portfolio constructed over 12 months. This arbitrary technique, which has no statistical basis, is unsatisfactory, as many current accounts are partially or totally depleted within one month because of the monthly nature of income.

The second class covers conventions that are considered more sophisticated, and these do call in part on statistical studies; because of the very restrictive hypotheses retained, however, construction of the replicating portfolio remains a matter of convention. For example, we calculate two well-known statistical indicators of volatility, the arithmetic mean and the monthly standard deviation of the daily balances of all the deposits. The operation is repeated every two months, every quarter etc. in order to obtain the statistical volatility indicators (mean, standard deviation) over a temporal horizon that increases from month to month. The interest, of course, lies in making the calculation over several years in order to refine the allocation of stable resources to long-term uses such as credit facilities.
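The two conventions can be sketched as follows (a minimal Python sketch; the balances are hypothetical, and the normality of the daily balances is the assumption the probabilistic convention rests on):

```python
from statistics import NormalDist

def linear_runoff(balance, months=12):
    # simplistic convention: 1/n of the balance matures each month
    return [balance / months] * months

def unstable_proportion(mu, sigma):
    # probabilistic convention: unstable part = Pr[balance < 0] for a
    # balance assumed normal with mean mu and standard deviation sigma
    return NormalDist(mu, sigma).cdf(0.0)

schedule = linear_runoff(10_000.0)             # 12 equal monthly tranches
p = unstable_proportion(mu=100.0, sigma=40.0)  # about 0.62 % runs off in month 1
```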
Thanks to these indicators it is possible, using probability theory, to calculate the portion of the deposits that will be depreciated month by month. For example:


to define the unstable portion of deposits for one month, we first calculate the probability that the current account will show a debit balance, given the monthly mean and the standard deviation of the totals observed over the month. The probability obtained is equal to the unstable proportion for one month. In the general case, the percentage of deposits depreciated equals Pr[X < 0] = Φ(−µ/σ), where σ is the standard deviation over a period (one or two months etc.) of the daily totals of deposits, µ is the arithmetic mean of the deposits over the same period, and Φ is the standard normal distribution function.

With this method, part of the deposits is depreciated or deducted each month until depreciation is complete; in other words, the balance sheet is deflated. For example, a demand deposit entry of EUR10 000 million in the balance sheet will be broken down into monthly due dates that generally cover several years.

Naturally, this convention for constructing a replicating portfolio is more satisfying than a simple arbitrary convention. Some serious weaknesses have, however, been noted. If we have a product with a credit balance, the proportion depreciated during the first month is the probability of the balance becoming a debit balance in view of the monthly arithmetic mean and standard deviation calculated and observed. Under this approach, instability amounts to the probability of having a debit balance (for a product in liabilities) or a credit balance (for a product in assets): credit positions capable of being debited to a considerable extent are thus deemed stable! This shows the limits of an approach built on the global balance, that is, on totalling the accounting position day by day.

12.5.3 The contract-by-contract replicating portfolio
The other methods consist of producing more accurate projections for demand products on the basis of statistical analyses.
The first prerequisite for a statistical analysis to be consistent is to identify correctly each component that explains the overall development. In other words, the statistical analysis builds up the replicating portfolio account by account and day by day; the portfolio is not built on the daily accounting balance, which merges the behaviour of all the accounts. The banks allocate one account per type of product and per client. The account-by-account analysis is more refined, as it allows the behaviour of the flows to be identified per type of client. The account-by-account daily analysis raises technical problems of database constitution, notably in the large-system or 'mainframe' environment, because of the volume created by the large number of current accounts or cheques and the need for historical entries.

After the completion of this first stage, considerable thought was applied to defining the concept of stability in theoretical terms. To carry out this work, we used two concepts:

• The first was the method of the account-by-account replicating portfolio. We considered that the balance observed at moment t is the product of a whole set of interwoven accounts with different profiles, different cashflow behaviour and nonsimultaneous creation dates.
• The second was the stability test adopted for defining a statistically stable account. The test used is the standardised range, or SR. This is a practical test for judging the normality of a statistical distribution, as it is easy to interpret and calculate. SR measures the extent of the extreme values in the observations of a sample per unit of dispersion (the standard deviation13). It is expressed as follows:

SR = [max(Xi) − min(Xi)] / σX

This test allows three types of statistical distribution to be identified: a normal or Gaussian distribution; a flat distribution with a statistical dispersion higher than that of a normal law; and a distribution with a statistical dispersion lower than that of a normal law. A demand current account can be considered stable in the third case: the difference between the extreme values, max(Xi) − min(Xi), is low relative to the standard deviation.

The SR statistical test can be carried out with several confidence intervals, and the test can be programmed with differentiated confidence intervals. It is preferable to use a wide confidence interval to judge the daily stability of the account, in order to avoid the problem posed by the monthly payment of income. In addition, the second condition for daily account stability is the absence of debit balances in the monthly historical values. Over a monthly history, it is preferable to take a wider confidence interval when the history of the deposits shows at least one debit balance, and a narrower interval otherwise.

After the stable accounts have been identified, we can reasonably create repayment schedules by extending the trends or historical tendencies. On statistically stable accounts, two major trend types exist. In the upward trend, the deposits are stable over the long term and the total observed at moment t will therefore be depreciated once, over a long period; this may be the date of the historical basis. In the downward trend, it is possible by prolonging the trend to find the future date of complete depreciation of the account; the balance of the account at moment t is then depreciated linearly until the maturity date obtained by prolonging the trend.

In order to provide an explanation, we have synthesised the conditions of stability in Table 12.9. We have identified four cases.
'SR max' corresponds to a wide confidence interval, while 'SR min' corresponds to a narrower one.

Table 12.9 Stability typologies on current account deposits

  Type of case   Daily stability   Monthly stability   Historical monthly balances   Type of trend         Maturity date
  1              Yes (SR max)      Yes (SR min)        Always in credit              Upward & horizontal   Duration of history of data
  2              Yes (SR max)      Yes (SR min)        Always in credit              Downward              Duration of trend prolongation
  3              Yes (SR max)      Yes (SR max)        At least one debit balance    Generally upward      Duration of history of data
  4              Yes (SR max)      No (SR min)         Always in credit              No trend              Duration of history of data (for historical min. total)

13 There are of course other statistical tests for measuring the normality of a statistical distribution, such as the χ2 test, the Kolmogorov–Smirnov test for samples with over 2000 contracts, and the Shapiro–Wilk test where needed.
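The SR test and the two stability conditions can be sketched as follows (a Python sketch with hypothetical daily balances; the single `sr_max` threshold stands in for the differentiated confidence bounds discussed above):

```python
# Standardised range (SR): extent of the extreme values of a sample
# per unit of dispersion (the sample standard deviation).

def standardised_range(balances):
    n = len(balances)
    mean = sum(balances) / n
    sigma = (sum((x - mean) ** 2 for x in balances) / (n - 1)) ** 0.5
    return (max(balances) - min(balances)) / sigma

def is_stable(balances, sr_max=4.0):
    # stable: dispersion below that of a normal law (low SR) and
    # no debit balance anywhere in the history
    return min(balances) >= 0 and standardised_range(balances) < sr_max

daily = [100.0, 101.0, 99.0, 100.0, 100.0]
stable = is_stable(daily)     # True: SR is about 2.83 and always in credit
```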


The fourth case requires further explanation. These accounts are always in credit balance on the daily and monthly histories, but are not stable on a monthly basis. On the other hand, there is a historical minimum credit balance that can be considered stable; economists call this 'liquidity preference'. In this case, the minimum historical total will be placed at the long end of the repayment schedule (the database date). The unstable contracts, or the unstable part of a contract, will have a short-term maturity date (1 day to 1 month).

This method allows better integration of products without maturity dates into the liquidity management and rate risk tools. Based on the SR test and the account-by-account replicating portfolio, it is simple in design and easy to carry out technically. Specifically, an accounting position of 120 will be broken down as follows: the unstable part will have a maturity date of one day or one month, and the stable part will be spread from two months out to the limit of the historical period.

If the history, and therefore the synthetic maturity dates, are judged insufficient, especially on savings products without maturity dates, it is possible under certain hypotheses to extrapolate the stability level and define a longer maturity for smaller totals. Suppose the historical period is 12 months and a volume of 100 out of the 130 observed is defined as stable; the maturity is therefore one year. It is also known that the volatility of a financial variable calculated over one year can be extrapolated to two years by multiplying the standard deviation by the root of time: σ2 years = σ1 year · √2. Symmetrically, it can be considered that the stable part diminishes in proportion to the root of time. The stable part at five years can thus be defined: 100 · 1/√5 = 100 · 0.447 = 44.7 %. We therefore have 30 at one day, 55.27 at one year and 44.73 at five years.
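The extrapolation in the worked example can be checked as follows (figures from the example above; the function name is ours):

```python
# Square-root-of-time extrapolation of the stable part: of the 130 observed,
# 100 is stable over the 12-month history, and the stable part is assumed
# to diminish with the square root of time.

def stable_part_at(stable_volume, years):
    return stable_volume / years ** 0.5

five_years = stable_part_at(100.0, 5)   # about 44.72 placed at five years
one_year = 100.0 - five_years           # about 55.28 stays at one year
one_day = 130.0 - 100.0                 # 30 unstable, placed at one day
```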
The stability obtained on the basis of a monthly and daily history therefore takes overall account of the explanatory variables of instability (arbitrage behaviour, monthly payment of income, liquidity preference, anticipation of rates, seasonality etc.). In this method, the interest rate is an exogenous variable; the link between changes in stability and interest rates therefore depends on the frequency of the stability analysis. The method allows specific implicit maturity dates to be found while remaining a powerful tool for allocating resources to a product without a maturity date located among the assets. For a liability bank, a good knowledge of the flows will allow resources to be placed over the long term instead of on the interbank market, and therefore provide an additional margin if the rate curve is positive. For an asset bank, this procedure will allow better management of the liquidity risk and the rate risk.

Conversely, this historical and behavioural approach to replicating portfolios poses problems when rate simulations are carried out in ALM: in the absence of an endogenous rate variable, knowledge of the link between rates and the replicating portfolio is limited to history. This last point justifies the research into replicating portfolios that includes interest rates in the modelling process.

12.5.4 Replicating portfolios with the optimal value method
12.5.4.1 Presentation of the method
This method was developed by Smithson14 in 1990 according to the 'building block approach' or 'Lego approach'. The method proposes a definition of optimal replicating portfolios

14 Smithson C., A Lego approach to financial engineering. In The Handbook of Currency and Interest Rate Risk Management, edited by R. Schwarz and C. W. Smith Jr., New York Institute of Finance, 1990; or Damel P., 'L'apport de replicating portfolio ou portefeuille répliqué en ALM: méthode contrat par contrat ou par la valeur optimale', Banque et Marchés, mars–avril 2001.


by integrating market interest rates and the anticipated repayment risk, and considers the interest rate(s) to be endogenous variables. The previous, purely historical perspective is much more limited than this one when the bank carries out stochastic or other rate simulations on the ALM indicators (VaR, NPV of equity funds, interest margins etc.).

In this method, it is considered that the stable part of a product without a maturity date is a function of simple rate contracts with known maturity dates. Here, the definition of stability is not provided contract by contract but on the basis of daily or monthly accounting volumes. An equation allows an optimal representation of the chronological series of the accounting positions. This first point defines a stable part and a volatile part, the volatile part being the statistical residue of the stability equation. The volatile part is represented by a short-term bond with a short-term monetary reference rate (such as one month). The stable part consists of a number of interwoven zero-coupon bonds with reference rates and maturity dates from 3 months to 15 years; the weave defines a refinancing strategy based on the money market and the primary bond market. The stable part thus consists of rate products. The advantage of this approach is that the early repayment rate is taken into account together with any 'repricing' of the product, the volume being linked to the reference interest rates.

The model contains two principal equations, with the following notation:

• Volum t represents the accounting position at moment t.
• Stab t represents the stable part of the volume at moment t.
• rrt is the rate of the product at moment t; taux1m, taux2m etc. represent the market reference rates for maturities of 1 month, 2 months etc.
• εt represents the statistical residual, or volatile part, of the accounting positions.
• brit represents the interest on a zero-coupon bond position with maturity date i and market reference rate i at time t.
• αi represents the stable part replicated by the brit position; the αi sum to 1 (i = 3 months to 15 years).
• mrt represents the portion of the demand product rate that is not a function of the market rate. mrt is also equal to the difference between the average weighted rate obtained from the interwoven bonds and the floating or fixed retail rate. This last term also incorporates the repricing strategy and the spread, which will be negative on liability products and positive on asset products.

Wilson15 was the first to use this approach specifically for optimal value. His equations can be presented as follows:

Volum t = Stab t + εt   (a)

Volum t · rrt = εt · r1 month,t + Σ (i = 3 months, …, 15 years) αi · brit + mrt + δt   (b)

with the constraint Σ (i = 3 months, …, 15 years) αi = 1.

15 Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.


Example of a replicated zero-coupon position
br6m is a bond with a six-month maturity and a six-month market reference rate. The stable part at t1 is considered to be invested in a six-month bond at the six-month market rate. At t2, t3, t4, t5 and t6 the new deposits (the difference between Stab t−1 and Stab t) are likewise placed in a six-month bond at the six-month reference market rate prevailing at each of those dates. At t7 the stable part invested at t1 has matured; this stable part and the new deposits are reinvested at six months at the six-month market rate prevailing at t7. brit works in the same way for all the reference rates from three months to 15 years.

After econometric adjustment of this two-equation model, the αi readily give us the duration of this demand product, using the additivity property of duration. If α1y = 0.5 and α2y = 0.5, the duration of this product without a maturity date will be 18 months.

12.5.4.2 Econometric adjustment of the equations
A. The stability or definition equation
There are many different forecasting models for chronological series. For rising accounting volumes, the equation will differ from that obtained for decreasing or sine-wave accounting volumes; the equation to be adopted is the one that minimises the error term ε. A non-comprehensive list of techniques for forecasting a chronological series:

• regression;
• trend extrapolation;
• exponential smoothing;
• autoregressive moving average (ARMA).

Wilson uses exponential smoothing: the stability of the volumes is an exponential function of time,

Stab t = b0 · e^(b1·t) + εt   or   log Stab t = log b0 + b1 · t + δt

Instead of this arbitrary formula, we propose to model the volumes using classical methods or recent research into stochastic models specialised in time-series analysis, which are much better adapted to estimating temporal series. The ARMA model is a classical model; it considers that the volumes observed are produced by a stationary random process, that is, one whose statistical properties do not change over the course of time. The moments of the process (mathematical expectation, variance and covariance) are independent of time, the variables follow a Gaussian distribution, and the variance must be finite. Volumes are observed at equidistant moments (a process in discrete time). We will take as an example the floating-demand savings accounts in LUF/BEF


from 1996 to 1999, observed monthly (data on the CD-ROM). The form given to the model is that of the recurrence system

Volum t = a0 + Σ (i = 1, …, p) ai · Volum t−i + εt

where a0 + a1 Volum t−1 + … + ap Volum t−p represents the autoregressive model that is ideal or perfectly adjusted to the chronological series, thus being devoid of uncertainty, and εt is a moving-average process:

εt = Σ (i = 0, …, q) bi · ut−i

The ut−i values constitute 'white noise' (non-autocorrelated, centred normal random variables with mean 0 and standard deviation equal to 1). εt is therefore a centred random variable with constant variance. This type of model is an ARMA(p, q) model.

Optimisation of the ARMA(p, q) model
The first stage consists of constructing the model on the observed data without transformation (Volum t). A first approach is to test several ARMA(p, q) models and to select the one that maximises the usual adjustment criteria:

• The log-likelihood function. Box and Jenkins propose least squares estimators (R-square, plain or adjusted, in the example), identical to the maximum likelihood estimators if the random variables are considered normally distributed; this last point is consistent with the ARMA approach.
• AIC (Akaike's information criterion).
• The Schwartz criterion.
• Other criteria, not referenced in the example (FPE: final prediction error; BIC: Bayesian information criterion; Parzen CAT: criterion of autoregressive transfer function).

The other approach consists of constructing the model on the basis of the graphic autocorrelation test. This identification stage takes account of the autocorrelation test for all the possible intervals (t − n). The autocorrelation function must be decreasing or oscillating and damped. In the example, the graph shows, on the basis of the two-sided Student test (t = 1.96), that the one- and two-period intervals have an autocorrelation significantly different from 0 at the 5 % confidence threshold. The ARMA model will therefore have an AR component equal to two (AR(2)). This stage may be completed in a similar way by partial autocorrelation, which takes account of the effects of the intermediate values between Volum t and Volum t+r in the autocorrelation. The model to be tested is ARMA(2, 0). The random disturbances in the model must not be autocorrelated.
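The ARMA recurrence above can be sketched by simulation (a minimal sketch; the coefficients echo the orders of magnitude of the ARMA(2, 2) fit reported below, but the series is synthetic, not the savings-account data):

```python
import random

# Simulate the ARMA(p, q) recurrence: y_t = sum a_i y_{t-1-i} + sum b_i u_{t-i},
# where the u's are Gaussian white noise.

def simulate_arma(a, b, n, seed=0):
    # a: AR coefficients (a1..ap), b: MA coefficients (b0..bq)
    rng = random.Random(seed)
    u = [rng.gauss(0.0, 1.0) for _ in range(n + len(b))]
    y = [0.0] * n
    for t in range(n):
        ar = sum(a[i] * y[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0)
        ma = sum(b[i] * u[t + len(b) - 1 - i] for i in range(len(b)))
        y[t] = ar + ma
    return y

# AR coefficients sum to less than 1, so the process is stationary
series = simulate_arma(a=[0.35, 0.41], b=[1.0, 0.0002, -0.91], n=200)
```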
Where applicable, the autocorrelations not accounted for have not been included in the AR part. There are different tests, including the Durbin–Watson non-autocorrelation error test. In the example of the savings accounts, the optimal ARMA model with normally distributed and uncorrelated residuals is the ARMA(2, 2) model, with an acceptable R² of 0.67. This model is stationary, as the sum of the AR coefficients is less than 1. The ARMA(2, 2) model obtained (Table 12.10) is as follows. The monthly accounting data and the zero-coupon rates for 1 month, 6 months, 1 year, 2 years, 4 years, 7 years and 10 years can be found on the CD-ROM. The model presented has been calculated on the basis of data from end November 1996 to end February 1999.

Table 12.10 ARMA(2, 2) model

R-square = 0.7251        Adjusted R-square = 0.6773
Akaike Information Criterion − AIC(K) = 43.539
Schwarz Criterion − SC(K) = 43.777

Parameter estimates    Estimate         STD error       T-STAT
AR(1)                  0.35356          0.1951          1.812
AR(2)                  0.40966          0.2127          1.926
MA(1)                  0.2135E−3        0.1078          0.0019
MA(2)                  −0.91454         0.05865         −15.59
Constant               0.90774E+10      0.7884E+10      1.151

Residuals: Skewness 1.44, Kurtosis 7.51, Studentised range 5.33

If the model is nonstationary (nonstationary variance and/or mean), it can be converted into a stationary model by integration of order r after a logarithmic transformation: if y is the transformed variable, apply the technique to Δ^r y_t (the difference operator Δy_t = y_t − y_{t−1} applied r times) instead of y_t. We therefore use an ARIMA(p, r, q) procedure.16 If this procedure fails because of nonconstant volatility in the error term, it will be necessary to use the ARCH-GARCH or EGARCH models (Appendix 7).

B. The equation on the replicated positions

This equation may be estimated by a statistical model (such as the SAS/OR procedure PROC NLP), using multiple regression with the constraints

Σ_{i = 3 months}^{15 years} α_i = 1    and    α_i ≥ 0

It is also possible to estimate the replicated positions (b) with the single constraint (by using the SAS/STAT procedure)

Σ_{i = 3 months}^{15 years} α_i = 1

In both cases, the duration of the demand product is a weighted average of the durations. In the second case, it is possible to obtain negative α_i values. We therefore have a synthetic investment-loan position on which the duration is calculated.

16 Autoregressive integrated moving average.
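As an illustration of the estimation under the single constraint Σα_i = 1, the constrained least-squares problem can be solved directly from its Lagrangian first-order conditions. The sketch below (function name and data are ours; the text itself uses the SAS/STAT and SAS/OR procedures) assembles and solves the resulting linear system; imposing α_i ≥ 0 as well would require a quadratic-programming solver:

```python
import numpy as np

def replicate_sum_to_one(F, y):
    """Least-squares weights alpha for y ~ F @ alpha under the single
    constraint sum(alpha) = 1, via the Lagrangian first-order conditions."""
    n = F.shape[1]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = 2 * F.T @ F      # from the gradient of ||F a - y||^2
    kkt[:n, n] = 1.0               # column for the Lagrange multiplier
    kkt[n, :n] = 1.0               # the constraint row: sum(alpha) = 1
    rhs = np.concatenate([2 * F.T @ y, [1.0]])
    return np.linalg.solve(kkt, rhs)[:n]
```

When the dependent variable is an exact combination of the bond positions with weights summing to 1, the function recovers those weights.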

Techniques for Measuring Structural Risks in Balance Sheets


Table 12.11 Multiple regression model obtained on BEF/LUF savings accounts on the basis of a SAS/STAT procedure (adjusted R-square 0.9431)

Variable                    Parameter estimate    Standard error    Prob > |T|
Intercept (global margin)   −92 843 024           224 898 959       0.6839
F1M (stable part)           0.086084              0.00583247        0.0001
F6M (stable rollover)       −0.015703             0.05014466        0.7573
F1Y (stable rollover)       0.036787              0.07878570        0.6454
F2Y (stable rollover)       0.127688              0.14488236        0.3881
F4Y (stable rollover)       3.490592              1.46300205        0.0265
F7Y (stable rollover)       −4.524331             2.94918687        0.1399
F10Y (stable rollover)      1.884966              1.63778119        0.2627

If α_1y = 2.6 and α_6m = −1.6 for a liability product, duration = 1 − (1.6/2.6) × 0.5 = 0.69 of a year. The interwoven bond positions on the stable part have been calculated on the basis of the zero-coupon rates (1 month, 6 months, 1 year, 2 years, 4 years, 7 years, 10 years); see Table 12.11. Equation (b) is very well adjusted, as R² is 94.31 %. The interest margin is of course negative, as the cost of the resources on the liabilities side is lower than market conditions. Like Wilson, we consider that the margin between the average rate for the interwoven bonds and the product rate is constant over the period. It could also be considered that the margin is not constant, as the floating rate is not instantaneously re-adjusted in line with changes in market rates. On the other hand, the quality of the clients, and therefore the credit spread, is not necessarily constant over the period. The sum of the coefficients associated with the interwoven bond positions is 1. This multiple linear regression allows us to calculate the duration of this product without a maturity date on the basis of the synthetic bond positions obtained. In the example, the duration obtained from the unstable and stable positions equals 1.42 years.

Appendices

1 Mathematical concepts
2 Probabilistic concepts
3 Statistical concepts
4 Extreme value theory
5 Canonical correlations
6 Algebraic presentation of logistic regression
7 Time series models: ARCH-GARCH and EGARCH
8 Numerical methods for solving nonlinear equations

Appendix 1 Mathematical Concepts1

1.1 FUNCTIONS OF ONE VARIABLE

1.1.1 Derivatives

1.1.1.1 Definition

The derivative2 of function f at point x0 is defined as

f′(x0) = lim_{h→0} [f(x0 + h) − f(x0)] / h

if this limit exists and is finite. If the function f is derivable at every point within an open interval ]a; b[, this defines a new function within that interval: the derivative function, termed f′.

1.1.1.2 Geometric interpretations

For a small value of h, the numerator in the definition represents the increase (or decrease) in the value of the function when the variable x passes from value x0 to the neighbouring value (x0 + h), that is, the length of AB (see Figure A1.1). The denominator in the same expression, h, is in turn equal to the length of AC. The ratio is therefore equal to the slope of the straight line BC. When h tends towards 0, this straight line BC moves towards the tangent to the function graph at point C. The geometric interpretation of the derivative is therefore as follows: f′(x0) represents the slope of the tangent to the graph of f at point x0. In particular, the sign of the derivative characterises the type of variation of the function: a positive (resp. negative) derivative corresponds to an increasing (resp. decreasing) function. The derivative therefore measures the speed at which the function increases (resp. decreases) in the neighbourhood of a point. The derivative of the derivative, termed the second derivative and written f″, will therefore be positive when the function f′ is increasing, that is, when the slope of the tangent to the graph of f increases as the variable x increases: the function is then said to be convex. Conversely, a function with a negative second derivative is said to be concave (see Figure A1.2).

1.1.1.3 Calculations

Finally, remember the elementary rules for calculating derivatives. Those relative to operations between functions first of all:

(f + g)′ = f′ + g′
(λf)′ = λf′

1 Readers wishing to find out more about these concepts should read: Bair J., Mathématiques générales, De Boeck, 1990; Esch L., Mathématique pour économistes et gestionnaires, De Boeck, 1992; Guerrien B., Algèbre linéaire pour économistes, Economica, 1992; Ortega M., Matrix Theory, Plenum, 1987; Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.
2 Also referred to as the first derivative.

Figure A1.1 Geometric interpretation of the derivative

Figure A1.2 Convex and concave functions

(fg)′ = f′g + fg′
(f/g)′ = (f′g − fg′) / g²

Next, the rule relating to compound functions:

[g(f)]′ = g′(f) · f′

Finally, the formulae that give the derivatives of a few elementary functions:

(x^m)′ = m x^{m−1}
(e^x)′ = e^x
(a^x)′ = a^x ln a
(ln x)′ = 1/x
(log_a x)′ = 1 / (x ln a)

1.1.1.4 Extrema

The point x0 is a local maximum (resp. minimum) of the function f if f(x0) ≥ f(x) (resp. f(x0) ≤ f(x)) for any x close to x0.
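The limit definition and the elementary rules above can be checked numerically with a central finite difference (an illustrative sketch; the helper name is ours):

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of the derivative f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# product rule check at x0 = 1.2: (fg)' = f'g + fg'
f = lambda x: x ** 3      # f'(x) = 3x^2
g = math.exp              # g'(x) = e^x
x0 = 1.2
lhs = derivative(lambda x: f(x) * g(x), x0)
rhs = derivative(f, x0) * g(x0) + f(x0) * derivative(g, x0)
```

Both sides agree to within the accuracy of the finite difference, as do the formulae for the elementary functions.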


The extrema within an open interval of a derivable function can be determined thanks to two conditions.

• The first-order (necessary) condition states that if x0 is an extremum of f, then f′(x0) = 0. At such a point, called a stationary point, the tangent to the graph of f is therefore horizontal.
• The second-order (sufficient) condition allows the stationary points to be 'sorted' according to their nature. If x0 is a stationary point of f and f″(x0) > 0, we then have a minimum; in the opposite situation, if f″(x0) < 0, we have a maximum.

1.1.2 Taylor's formula

Consider a function f that one wishes to study in the neighbourhood of x0 (let us say, at x0 + h). One method is to replace this function by a polynomial, a function that is easily handled, in the variable h:

f(x0 + h) = a0 + a1 h + a2 h² + · · ·

For the function f to be represented through the polynomial, both must:

• take the same value at h = 0;
• have the same slope (that is, the same first derivative) at h = 0;
• have the same convexity or concavity (that is, the same second derivative) at h = 0;
• and so on.

Also, the number of conditions to be imposed must correspond to the number of coefficients to be determined within the polynomial. It will be evident that these conditions lead to:

a0 = f(x0) = f(x0)/0!
a1 = f′(x0) = f′(x0)/1!
a2 = f″(x0)/2 = f″(x0)/2!
· · ·
ak = f^{(k)}(x0)/k!
· · ·

Generally, therefore, we can write:

f(x0 + h) = f(x0)/0! + [f′(x0)/1!] h + [f″(x0)/2!] h² + · · · + [f^{(n)}(x0)/n!] h^n + Rn

Here Rn , known as the expansion remainder, is the difference between the function f to be studied and the approximation polynomial. This remainder will be negligible under certain conditions of regularity as when h tends towards 0, it will tend towards 0 more quickly than hn .


The use of Taylor's formula in this book does not need a high-degree polynomial, and we will therefore write more simply:

f(x0 + h) ≈ f(x0) + [f′(x0)/1!] h + [f″(x0)/2!] h² + [f‴(x0)/3!] h³ + · · ·

For some elementary functions, Taylor's expansion takes a specific form that is worth remembering:

e^x ≈ 1 + x/1! + x²/2! + x³/3! + · · ·
(1 + x)^m ≈ 1 + (m/1!) x + [m(m − 1)/2!] x² + [m(m − 1)(m − 2)/3!] x³ + · · ·
ln(1 + x) ≈ x − x²/2 + x³/3 − · · ·

A specific case of power function expansion is the Newton binomial formula:

(a + b)^n = Σ_{k=0}^{n} C(n, k) a^k b^{n−k}
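These expansions are easy to verify numerically. The sketch below (function names are ours) compares partial sums of the e^x expansion with the exact value and evaluates Newton's binomial formula term by term:

```python
import math

def exp_taylor(x, n):
    """Partial sum of the Taylor expansion e^x ~ sum of x^k / k!, k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def binomial_expansion(a, b, n):
    """Newton's binomial formula: (a+b)^n = sum of C(n,k) a^k b^(n-k)."""
    return sum(math.comb(n, k) * a ** k * b ** (n - k) for k in range(n + 1))
```

The remainder Rn of the exponential expansion shrinks faster than x^n, so a modest number of terms already gives high accuracy.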

1.1.3 Geometric series

If, within the Taylor formula for (1 + x)^m, x is replaced by (−x) and m by (−1), we will obtain:

1/(1 − x) ≈ 1 + x + x² + x³ + · · ·

It is easy to demonstrate that when |x| < 1, the sequence

1
1 + x
1 + x + x²
· · ·
1 + x + x² + · · · + x^n
· · ·

will converge towards the number 1/(1 − x). The limit of this sequence is therefore a sum comprising an infinite number of terms, termed a series. What we are concerned with here is the geometric series:

1 + x + x² + · · · + x^n + · · · = Σ_{n=0}^{∞} x^n = 1/(1 − x)

A relation linked to this geometric series is the one that gives the sum of the terms in a geometric progression: the sequence t1 , t2 , t3 etc. is characterised by the relation tk = tk−1 · q (k = 2, 3, . . .)


the sum t1 + t2 + t3 + · · · + tn is given by the relation:

Σ_{k=1}^{n} tk = (t1 − t_{n+1})/(1 − q) = t1 (1 − q^n)/(1 − q)
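Both results can be confirmed by direct summation (the helper names are ours):

```python
def geometric_partial_sum(x, n):
    """1 + x + x^2 + ... + x^n, summed term by term."""
    return sum(x ** k for k in range(n + 1))

def geometric_progression_sum(t1, q, n):
    """Sum of the n terms t1, t1*q, ..., t1*q^(n-1) by the closed form."""
    return t1 * (1 - q ** n) / (1 - q)
```

For |x| < 1 the partial sums approach 1/(1 − x); for a progression, the closed form matches the term-by-term total.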

1.2 FUNCTIONS OF SEVERAL VARIABLES

1.2.1 Partial derivatives

1.2.1.1 Definition and graphical interpretation

For a function f of n variables x1, x2, . . ., xn, the concept of derivative is defined in a similar way, although the increase h can relate to any one of the variables. We will therefore have n concepts of derivative, one relative to each of the n variables, and they will be termed partial derivatives. The partial derivative of f(x1, x2, . . ., xn) with respect to xk at the point (x1(0), x2(0), . . ., xn(0)) will be defined as:

f′_{xk}(x1(0), . . ., xn(0)) = lim_{h→0} [f(x1(0), . . ., xk(0) + h, . . ., xn(0)) − f(x1(0), . . ., xk(0), . . ., xn(0))] / h

The geometric interpretation of the partial derivatives can only be envisaged for functions of two variables, as the graph of such a function enters the field of three dimensions (one dimension for each of the two variables and the third, the ordinate, for the values of the function). We will thus be examining the partial derivatives:

f′_x(x0, y0) = lim_{h→0} [f(x0 + h, y0) − f(x0, y0)] / h
f′_y(x0, y0) = lim_{h→0} [f(x0, y0 + h) − f(x0, y0)] / h

Let us now look at the graph of this function f(x, y), a surface in three-dimensional space (see Figure A1.3). Let us also consider the vertical plane that passes through the point (x0, y0) and is parallel to the Ox axis. Its intersection with the graph of f is the curve Cx. The same reasoning as that adopted for functions of one variable shows that the partial derivative f′_x(x0, y0) is equal to the slope of the tangent to that curve Cx at the axis point (x0, y0) (that is, the slope of the graph of f in the direction of x). In the same way, f′_y(x0, y0) represents the slope of the tangent to Cy at the axis point (x0, y0).

1.2.1.2 Extrema without constraint

The point (x1(0), . . ., xn(0)) is a local maximum (resp. minimum) of the function f if f(x1(0), . . ., xn(0)) ≥ f(x1, . . ., xn) [resp. f(x1(0), . . ., xn(0)) ≤ f(x1, . . ., xn)] for any (x1, . . ., xn) close to (x1(0), . . ., xn(0)).

Figure A1.3 Geometric interpretation of partial derivatives

As for the functions of a single variable, the extrema of a derivable function can be determined thanks to two conditions.

• The first-order (necessary) condition states that if x(0) is an extremum of f, then all the partial derivatives of f will be zero at x(0):

f′_{xi}(x(0)) = 0    (i = 1, . . ., n)

When referring to the geometric interpretation of the partial derivatives of a function of two variables: at this type of point (x0, y0), called a stationary point, the tangents to the curves Cx and Cy are therefore horizontal.

• The second-order (sufficient) condition allows the stationary points to be 'sorted' according to their nature, but first and foremost requires definition of the Hessian matrix of the function f at point x, made up of the second partial derivatives of f:

H(f(x1, . . ., xn)) = | f″_{x1x1}(x)  f″_{x1x2}(x)  · · ·  f″_{x1xn}(x) |
                     | f″_{x2x1}(x)  f″_{x2x2}(x)  · · ·  f″_{x2xn}(x) |
                     | · · ·                                            |
                     | f″_{xnx1}(x)  f″_{xnx2}(x)  · · ·  f″_{xnxn}(x) |

If x(0) is a stationary point of f and H(f(x)) is p.d. at x(0) or s.p. in a neighbourhood of x(0), we have a minimum. In the opposite situation, if H(f(x)) is n.d. at x(0) or s.n. in a neighbourhood of x(0), we have a maximum.3

1.2.1.3 Extrema under constraint(s)

This is a similar concept, but one in which the analysis of the problem of extrema is restricted to those x values that obey one or more constraints.

3 These notions are explained in Section 1.3.2.1 in this Appendix.


The point (x1(0), . . ., xn(0)) is a local maximum (resp. minimum) of the function f under the constraints

g1(x) = 0
. . .
gr(x) = 0

if x(0) itself verifies the constraints and

f(x1(0), . . ., xn(0)) ≥ f(x1, . . ., xn)    [resp. f(x1(0), . . ., xn(0)) ≤ f(x1, . . ., xn)]

for any (x1, . . ., xn) in a neighbourhood of (x1(0), . . ., xn(0)) satisfying the r constraints.

Solving this problem involves considering the Lagrangian function of the problem. This is a function of the (n + r) variables (x1, . . ., xn; m1, . . ., mr), the last r of which, known as Lagrangian multipliers, each correspond to a constraint:

L(x1, . . ., xn; m1, . . ., mr) = f(x) + m1 · g1(x) + · · · + mr · gr(x)

We will not go into the technical details of solving this problem. We will, however, point out an essential result: if the point (x(0); m(0)) is such that x(0) verifies the constraints and (x(0); m(0)) is an extremum (without constraint) of the Lagrangian function, then x(0) is an extremum for the problem of extrema under constraints.

1.2.2 Taylor's formula

Taylor's formula is also generalised for n-variable functions, but the degree 1 term, which involves the first derivative, is replaced by n terms involving the n partial derivatives:

f′_{xi}(x1(0), x2(0), . . ., xn(0))    i = 1, 2, . . ., n

In the same way, the degree 2 term, the coefficient of which involves the second derivative, here becomes a set of n² terms in which the various second partial derivatives are involved:

f″_{xixj}(x1(0), x2(0), . . ., xn(0))    i, j = 1, 2, . . ., n

Thus, by limiting the writing to the degree 2 terms, Taylor's formula is written as follows:

f(x1(0) + h1, x2(0) + h2, . . ., xn(0) + hn) ≈ f(x(0)) + (1/1!) Σ_{i=1}^{n} f′_{xi}(x(0)) hi + (1/2!) Σ_{i=1}^{n} Σ_{j=1}^{n} f″_{xixj}(x(0)) hi hj + · · ·


1.3 MATRIX CALCULUS

1.3.1 Definitions

1.3.1.1 Matrices and vectors

The term n-order matrix is given to a set of n² real numbers making up a square table consisting of n rows and n columns.4 A matrix is generally represented by a capital letter (such as A), and its elements by the corresponding lower-case letter (a) with two allocated indices representing the row and column to which the element belongs: aij is the element of matrix A located at the intersection of row i and column j within A. Matrix A can therefore be written generally as follows:

    | a11  a12  · · ·  a1j  · · ·  a1n |
    | a21  a22  · · ·  a2j  · · ·  a2n |
A = | · · ·                            |
    | ai1  ai2  · · ·  aij  · · ·  ain |
    | · · ·                            |
    | an1  an2  · · ·  anj  · · ·  ann |

In the same way, a vector of dimension n is a set of n real numbers forming a columnar table. The elements in a vector are its components and are referred to by a single index:

    | x1 |
    | x2 |
X = | ·  |
    | xi |
    | ·  |
    | xn |

1.3.1.2 Specific matrices

The diagonal elements in a matrix are the elements a11, a22, . . ., ann. They are located on the diagonal of the table that starts from the upper left-hand corner; this is known as the principal diagonal. A matrix is defined as symmetrical if the elements symmetrical with respect to the principal diagonal are equal: aij = aji. Here is an example:

    |  2  −3   0 |
A = | −3   1  √2 |
    |  0  √2   0 |

4 More generally, a matrix is a rectangular table with the format (m, n): m rows and n columns. We will, however, only be looking at square matrices here.


An upper triangular matrix is a matrix in which the elements located underneath the principal diagonal are zero: aij = 0 when i > j. For example:

    | 2  −1  0 |
A = | 0   3  0 |
    | 0   0  5 |

The concept of a lower triangular matrix is of course defined in a similar way. Finally, a diagonal matrix is one that is both upper triangular and lower triangular. Its only non-zero elements are the diagonal elements: aij = 0 when i and j are different. Generally, this type of matrix will be represented by:

    | a1  0   · · ·  0  |
A = | 0   a2  · · ·  0  |  = diag(a1, a2, . . ., an)
    | · · ·             |
    | 0   0   · · ·  an |

1.3.1.3 Operations

The sum of two matrices, as well as the multiplication of a matrix by a scalar, are completely natural operations: the operation in question is carried out element by element. Thus:

(A + B)ij = aij + bij
(λA)ij = λ aij

These definitions are also valid for vectors:

(X + Y)i = xi + yi
(λX)i = λ xi

The product of two matrices A and B is a matrix of the same order as A and B, in which the element (i, j) is obtained by calculating the sum of the products of the elements in row i of A with the corresponding elements in column j of B:

(AB)ij = ai1 b1j + ai2 b2j + · · · + ain bnj = Σ_{k=1}^{n} aik bkj

Despite the apparently complex definition, the matrix product has a number of classical properties: it is associative and distributive with respect to addition. However, it needs to be handled with care, as it lacks one of the classical properties: it is not commutative. AB does not equal BA!
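The 'lines by columns' rule, and the failure of commutativity, can be illustrated with a direct implementation of the definition (the helper name is ours):

```python
def matmul(A, B):
    """'Lines by columns' product: (AB)_ij = sum over k of a_ik * b_kj."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

For A = [[1, 2], [3, 4]] and B = [[0, 1], [1, 0]], the product AB swaps the columns of A while BA swaps its rows, so the two products differ.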


The product of a matrix by a vector is defined using the same "lines by columns" procedure:

(AX)i = Σ_{k=1}^{n} aik xk

The transposition of a matrix A is the matrix A^t, obtained by permuting the symmetrical elements with respect to the principal diagonal, or, which amounts to the same thing, by permuting the role of the rows and columns in matrix A:

(A^t)ij = aji

A matrix is thus symmetrical if, and only if, it is equal to its transposition. In addition, this operation, applied to a column vector, gives the corresponding row vector as its result. The inverse of matrix A is the matrix A⁻¹, if it exists, such that:

A A⁻¹ = A⁻¹ A = diag(1, . . ., 1) = I

For example, it is easy to verify that:

| 1  −1   1 |⁻¹   | 1  1  1 |
| 0   1  −2 |   = | 0  1  2 |
| 0   0   1 |     | 0  0  1 |

Finally, let us define the trace of a matrix. The trace is the sum of the matrix's diagonal elements:

tr(A) = a11 + a22 + · · · + ann = Σ_{i=1}^{n} aii

1.3.2 Quadratic forms

1.3.2.1 Quadratic form and class of symmetrical matrix

A quadratic form is a polynomial function of n variables containing only second-degree terms:

Q(x1, x2, . . ., xn) = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj

If we construct a matrix A from the coefficients aij (i, j = 1, . . ., n) and the vector X from the variables xi (i = 1, . . ., n), we can give a matrix expression to the quadratic form: Q(X) = X^t A X. In fact, by developing the right-hand side, we produce:

X^t A X = Σ_{i=1}^{n} xi (AX)i = Σ_{i=1}^{n} xi Σ_{j=1}^{n} aij xj = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj


A quadratic form can always be associated with a matrix A, and vice versa. The matrix, however, is not unique. In fact, the quadratic form Q(x1, x2) = 3x1² − 4x1x2 can be associated with the matrices

A = |  3  −2 |    B = | 3  −6 |    C = |  3  0 |
    | −2   0 |        | 2   0 |        | −4  0 |

as well as an infinite number of others. Amongst all these matrices, only one is symmetrical (A in the example given). There is therefore a bijection between the set of quadratic forms and the set of symmetrical matrices. The class of a symmetrical matrix is defined on the basis of the sign of the associated quadratic form. Thus, the non-zero matrix A is said to be positive definite (p.d.) if X^t A X > 0 for any X not equal to 0, and semi-positive (s.p.) when

X^t A X ≥ 0 for any X ≠ 0 and there is at least one Y ≠ 0 such that Y^t A Y = 0

A matrix is negative definite (n.d.) or semi-negative (s.n.) under the inverse inequalities, and the term non-definite is given to a symmetrical matrix for which there are some X and Y ≠ 0 such that X^t A X > 0 and Y^t A Y < 0.

The symmetrical matrix

A = |  5  −3  −4 |
    | −3  10   2 |
    | −4   2   8 |

is thus p.d., as the associated quadratic form can be written as:

Q(x, y, z) = 5x² + 10y² + 8z² − 6xy − 8xz + 4yz = (x − 3y)² + (2x − 2z)² + (y + 2z)²

This form will never be negative, and simply cancels out when:

x − 3y = 0
2x − 2z = 0
y + 2z = 0

that is, when x = y = z = 0.

1.3.2.2 Linear equation system

A system of n linear equations with n unknowns is a set of relations of the following type:

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
· · ·
an1 x1 + an2 x2 + · · · + ann xn = bn

In it, the aij, xj and bi are respectively the coefficients, the unknowns and the second members. They are written naturally in matrix and vector form: A, X and B. Using this notation, the system is written in an equivalent but more condensed way:

AX = B
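The sum-of-squares decomposition shown above for the p.d. matrix A can be checked numerically by evaluating Q(X) = X^t A X directly from the double-sum development (the helper name is ours):

```python
def quadratic_form(A, x):
    """Q(x) = x^t A x, developed as a double sum over the elements of A."""
    n = len(A)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# the positive definite matrix from the example
A = [[5, -3, -4], [-3, 10, 2], [-4, 2, 8]]
```

At every point, the value agrees with (x − 3y)² + (2x − 2z)² + (y + 2z)², which is why Q is never negative.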


For example, the system of equations

2x + 3y = 4
4x − y = −2

can also be written as:

| 2   3 | | x |   |  4 |
| 4  −1 | | y | = | −2 |
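For a 2 × 2 system the solution can be written down directly; the sketch below (the helper name is ours) applies Cramer's rule:

```python
def solve2(a, b, c, d, e, f):
    """Solve the system  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule."""
    det = a * d - b * c           # must be non-zero for a unique solution
    return ((e * d - b * f) / det, (a * f - c * e) / det)
```

For the system above it gives x = −1/7 and y = 10/7, which satisfy both equations.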

If the inverse of matrix A exists, it can easily be seen that the system admits one and just one solution, given by X = A⁻¹B.

1.3.2.3 Case of variance–covariance matrix5

The matrix

    | σ1²  σ12  · · ·  σ1n |
V = | σ21  σ2²  · · ·  σ2n |
    | · · ·                |
    | σn1  σn2  · · ·  σn² |

of the variances and covariances of a number of random variables X1, X2, . . ., Xn is a matrix that is either p.d. or s.p. In effect, regardless of what the numbers λ1, λ2, . . ., λn (not all zero) making up the vector Λ may be, we have:

Λ^t V Λ = Σ_{i=1}^{n} Σ_{j=1}^{n} λi λj σij = var( Σ_{i=1}^{n} λi Xi ) ≥ 0

It can even be said, according to this result, that the variance–covariance matrix V is p.d., except when there are coefficients λ1, λ2, . . ., λn, not all zero, such that the random variable λ1 X1 + · · · + λn Xn = Σ_{i=1}^{n} λi Xi is degenerate, in which case V will be s.p. This degeneration may occur, for example, when:

• one of the variables is degenerate;
• some variables are perfectly correlated;
• the matrix V is obtained on the basis of a number of observations strictly lower than the number of variables.

It will then be evident that the variance–covariance matrix can be expressed in matrix form, through the relation:

V = E[(X − µ)(X − µ)^t]

1.3.2.4 Choleski factorisation

Consider a symmetrical positive definite matrix A. It can be demonstrated that there exists a lower triangular matrix L with strictly positive diagonal elements so that A = LL^t.

5 The concepts necessary for an understanding of this example are shown in Appendix 2.


This factorisation process is known as a Choleski factorisation. We will not be demonstrating this property, but will show, using the previous example, how the matrix L is found:

       | a  0  0 | | a  b  d |   | a²   ab        ad           |
LL^t = | b  c  0 | | 0  c  f | = | ab   b² + c²   bd + cf      |
       | d  f  g | | 0  0  g |   | ad   bd + cf   d² + f² + g² |

           |  5  −3  −4 |
     = A = | −3  10   2 |
           | −4   2   8 |

It is then sufficient to work the last equality in order to find a, b, c, d, f and g in succession, which will give the following for matrix L:

    |  √5          0             0        |
L = | −3√5/5       √205/5        0        |
    | −4√5/5      −2√205/205     14√41/41 |
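The successive elimination of a, b, c, d, f and g is exactly what the general Choleski algorithm does; a compact pure-Python version (an illustrative sketch, not the book's own code) is:

```python
import math

def cholesky(A):
    """Lower triangular L with A = L L^t, for a symmetric p.d. matrix A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L
```

Applied to the example matrix, it reproduces the entries given above, starting with L[0][0] = √5.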

Appendix 2 Probabilistic Concepts1

2.1 RANDOM VARIABLES

2.1.1 Random variables and probability law

2.1.1.1 Definitions

Let us consider a fortuitous phenomenon, that is, a phenomenon that under given initial conditions corresponds to several possible outcomes. A numerical magnitude that depends on the observed result is known as a random variable or r.v. In addition, probabilities are associated with the various possible results or events defined in the context of the fortuitous phenomenon. It is therefore interesting to find out the probabilities of the various events defined on the basis of the r.v. What we are looking at here is the concept of the law of probability of the r.v. Thus, if the r.v. is termed X, the law of probability of X is defined by the range of the following probabilities: Pr[X ∈ A], for every subset A of R. The aim of the concept of probability law is an ambitious one: the subsets A of R are too numerous for all the probabilities to be known. For this reason, we are content to work with just the ]−∞; t] sets. This defines a function of the variable t, the cumulative distribution function, or more simply the distribution function (d.f.), of the random variable:

F(t) = Pr[X ≤ t]

It can be demonstrated that this function, defined on R, is increasing, that it lies between 0 and 1, that it admits the ordinates 0 and 1 as horizontal asymptotes (lim_{t→−∞} F(t) = 0 and lim_{t→+∞} F(t) = 1), and that it is right-continuous: lim_{s→t+} F(s) = F(t). These properties are summarised in Figure A2.1. In addition, despite its simplicity, the d.f. allows almost the whole of the probability law for X to be found, thus:

Pr[s < X ≤ t] = F(t) − F(s)
Pr[X = t] = F(t) − F(t−)

2.1.1.2 Quantile

Sometimes there is a need to solve the opposite problem: knowing a probability level u, determine the value of t so that F(t) = Pr[X ≤ t] = u. This value is known as the quantile of the r.v. X at point u, and its definition is shown in Figure A2.2.

1 Readers wishing to find out more about these concepts should read: Baxter M. and Rennie A., Financial Calculus, Cambridge University Press, 1996; Feller W., An Introduction to Probability Theory and its Applications (2 volumes), John Wiley and Sons, Inc., 1968; Grimmett G. and Stirzaker D., Probability and Random Processes, Oxford University Press, 1992; Roger P., Les outils de la modélisation financière, Presses Universitaires de France, 1991; Ross S. M., Initiation aux probabilités, Presses Polytechniques et Universitaires Romandes, 1994.

Figure A2.1 Distribution function

Figure A2.2 Quantile

Figure A2.3 Quantile in jump scenario

In two cases, however, the definition that we have just given is unsuitable and needs to be adapted. First of all, if the d.f. of X shows a jump that covers the ordinate u, no abscissa corresponds to it, and the abscissa of the jump will naturally be chosen (see Figure A2.3). Next, if the ordinate u corresponds to a plateau on the d.f. graph, there is an infinite number of abscissas to choose from (see Figure A2.4). In this case, an abscissa defined by the relation Q(u) = um + (1 − u)M can be chosen. The quantile function thus defined generalises the concept of the reciprocal function of the d.f.

2.1.1.3 Discrete random variable

A discrete random variable corresponds to a situation in which the set of possible values for the variable is finite or countably infinite. In this case, if the various possible values

Figure A2.4 Quantile in plateau scenario

and corresponding probabilities are known,

x1  x2  · · ·  xn  · · ·
p1  p2  · · ·  pn  · · ·

with Pr[X = xi] = pi (i = 1, 2, . . ., n, . . .) and Σ_i pi = 1, the law of probability of X can be easily determined:

Pr[X ∈ A] = Σ_{i: xi ∈ A} pi
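These definitions translate directly into code. The sketch below (helper names are ours) evaluates the stepped d.f. of a discrete r.v. and the associated quantile, choosing the jump abscissa when a jump covers the level u, as discussed above:

```python
def distribution_function(values, probs, t):
    """Stepped d.f. of a discrete r.v.: F(t) = Pr[X <= t]."""
    return sum(p for x, p in zip(values, probs) if x <= t)

def quantile(values, probs, u):
    """Smallest possible value t with F(t) >= u."""
    acc = 0.0
    for x, p in sorted(zip(values, probs)):
        acc += p
        if acc >= u - 1e-12:   # small tolerance for float accumulation
            return x
    return max(values)
```

For the two-point checks below, the d.f. jumps at each possible value by the associated probability, as described in the text.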

The d.f. of a discrete r.v. is a stepped function: the abscissas of the jumps correspond to the various possible values of X, and the heights of the jumps are equal to the associated probabilities (see Figure A2.5). In particular, a r.v. is defined as degenerate if it can only take on one value x (it is also referred to as a certain variable): Pr[X = x] = 1. The d.f. of a degenerate variable will be 0 to the left of x and 1 from x onwards.

2.1.1.4 Continuous random variable

In contrast to the discrete r.v., the set of possible values for a r.v. could be continuous (an interval, for example), with no individual value having a strictly positive probability: Pr[X = x] = 0 for every x.

Figure A2.5 Distribution function for a discrete random variable

Figure A2.6 Probability density

In this case, the distribution of probabilities over the set of possible values is expressed using a density function f: for a sufficiently small h, we will have Pr[x < X ≤ x + h] ≈ h f(x). This definition is shown in Figure A2.6. The law of probability is obtained from the density through the following relation:

Pr[X ∈ A] = ∫_A f(x) dx

And as a particular case:

F(t) = ∫_{−∞}^{t} f(x) dx

2.1.1.5 Multivariate random variables

Often there is a need to consider several r.v.s X1, X2, . . ., Xm simultaneously, associated with the same fortuitous phenomenon.2 Here, we will simply show the theory for a bivariate random variable, that is, a pair of r.v.s (X, Y); the general process for a multivariate random variable can easily be deduced from this. The law of probability for a bivariate random variable is defined as the set of the following probabilities: Pr[(X, Y) ∈ A], for every subset A of R². The joint distribution function is defined as

F(s, t) = Pr([X ≤ s] ∩ [Y ≤ t])

and the discrete and continuous bivariate random variables are defined respectively by:

pij = Pr([X = xi] ∩ [Y = yj])
Pr[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy

Two r.v.s are defined as independent when they do not influence each other, either through their possible values or through the probabilities of the events that they define. More formally, X and Y are independent when:

Pr([X ∈ A] ∩ [Y ∈ B]) = Pr[X ∈ A] · Pr[Y ∈ B]

for every A and B in R.

2 For example, the returns on various financial assets.


It can be shown that two r.v.s are independent if, and only if, their joint d.f. is equal to the product of the d.f.s of each of the r.v.s: F(s, t) = FX(s) · FY(t). For discrete and continuous random variables respectively, this condition reads:

pij = Pr[X = xi] · Pr[Y = yj]
f(x, y) = fX(x) · fY(y)

2.1.2 Typical values of random variables

The aim of the typical values of a r.v. is to summarise the information contained in its probability law in a number of representative parameters: parameters of location, dispersion, skewness and kurtosis. We will be looking at one from each group.

2.1.2.1 Mean

The mean is a central value that locates a r.v. by dividing the d.f. into two parts with the same area (see Figure A2.7). The mean µ of the r.v. X is therefore such that:

∫_{−∞}^{µ} F(t) dt = ∫_{µ}^{+∞} [1 − F(t)] dt

Figure A2.7 Mean of a random variable

The mean of a r.v. can be calculated on the basis of the d.f.:

µ = ∫_{0}^{+∞} [1 − F(t)] dt − ∫_{−∞}^{0} F(t) dt

the formula reducing, for a positive r.v., to:

µ = ∫_{0}^{+∞} [1 − F(t)] dt

It is possible to demonstrate that for a discrete r.v. and a continuous r.v. respectively, we have the formulae:

µ = Σi xi pi
µ = ∫_{−∞}^{+∞} x f(x) dx
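As an illustration, the two routes to the mean agree on a simple discrete example. The fair-die example below is not from the text; for a non-negative integer-valued r.v., the d.f. formula µ = ∫₀^∞ [1 − F(t)] dt reduces to a sum over the integers:

```python
from fractions import Fraction

# A fair die: xi = 1..6, each with probability pi = 1/6 (illustrative example).
values = range(1, 7)
p = Fraction(1, 6)

# Mean as the probability-weighted sum of the possible values.
mu = sum(x * p for x in values)

# Distribution function F(t) = Pr[X <= t].
def F(t):
    return sum(p for x in values if x <= t)

# For a non-negative integer r.v., the integral of 1 - F reduces to a sum.
mu_from_df = sum(1 - F(t) for t in range(0, 6))

assert mu == mu_from_df == Fraction(7, 2)
```

Both computations give µ = 7/2, confirming that weighting the values by their probabilities and integrating the survival function 1 − F are equivalent.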


The structure of these two formulae shows that µ integrates the various possible values of the r.v. X, weighting them by the probabilities associated with these values. It can be shown³ that these formulae generalise into an abstract integral of X(ω) with respect to the probability measure Pr on the set Ω of possible outcomes ω of the fortuitous phenomenon. This integral is known as the expectation of the r.v. X:

E(X) = ∫_Ω X(ω) dPr(ω)

According to the foregoing, there is equivalence between the concepts of expectation and mean (E(X) = µ) and we will use both terms interchangeably from now on. The properties of the integral show that the expectation is a linear operator:

E(aX + bY + c) = aE(X) + bE(Y) + c

and that if X and Y are independent, then E(XY) = E(X) · E(Y). In addition, for a discrete r.v. and a continuous r.v. respectively, the expectation of a function of a r.v. is given by:

E(g(X)) = Σi g(xi) pi
E(g(X)) = ∫_{−∞}^{+∞} g(x) f(x) dx

Let us remember finally the law of large numbers,⁴ which, for a sequence of independent r.v.s X1, X2, ..., Xn with identical distributions and mean µ, states that for every ε > 0:

lim_{n→∞} Pr[ |(X1 + X2 + · · · + Xn)/n − µ| ≤ ε ] = 1

This law justifies taking the average of a sample to estimate the mean of the population and, in particular, estimating the probability of an event through the frequency of that event's occurrence when a large number of realisations of the fortuitous phenomenon occur.

2.1.2.2 Variance and standard deviation

One of the most commonly used dispersion indices (that is, a measurement of the spread of the r.v.'s values around its mean) is the variance σ², defined as:

σ² = var(X) = E[(X − µ)²]

³ This development is part of measure theory, which is outside the scope of this work. Readers are referred to Loève M., Probability Theory (2 volumes), Springer-Verlag, 1977.
⁴ We are showing this law in its weak form here.
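The law of large numbers can be illustrated by simulation: the frequency of an event over many independent realisations approaches its probability. A minimal sketch (fair-die event with probability 1/3; the data are illustrative, not from the text):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Estimate Pr[X >= 5] = 1/3 for a fair die by the frequency of the event
# over a large number of independent realisations (weak law of large numbers).
n = 200_000
hits = sum(1 for _ in range(n) if random.randint(1, 6) >= 5)
freq = hits / n

# The frequency is close to the true probability 1/3.
assert abs(freq - 1/3) < 0.01
```

The standard deviation of the frequency is √(p(1 − p)/n) ≈ 0.001 here, so the 0.01 tolerance is comfortable.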



Figure A2.8 Variance of a random variable

By developing the right-hand member, the variance can also be written:

σ² = E(X²) − µ²

For a discrete r.v. and a continuous r.v. respectively, this gives:

σ² = Σi (xi − µ)² pi = Σi xi² pi − µ²
σ² = ∫_{−∞}^{+∞} (x − µ)² f(x) dx = ∫_{−∞}^{+∞} x² f(x) dx − µ²

An example of the interpretation of this parameter is found in Figure A2.8. It can be demonstrated that var(aX + b) = a² var(X), and that if X and Y are independent, then var(X + Y) = var(X) + var(Y). Alongside the variance, the dimension of which is the square of the dimension of X, we can also use the standard deviation, which is simply its square root: σ = √var(X).

2.1.2.3 Fisher's skewness and kurtosis coefficients

Fisher's skewness coefficient is defined by:

γ1 = E[(X − µ)³] / σ³

It is interpreted essentially on the basis of its sign: if γ1 > 0 (resp. γ1 < 0), the distribution is spread towards the right (resp. towards the left).

The log-normal distribution, with parameters µ and σ, has the density:

f(x) = 1/(x σ √(2π)) · exp[−(ln x − µ)²/(2σ²)]   (x > 0)

The graph for this density is shown in Figure A2.12 and its typical values are given by:

E(X) = e^{µ+σ²/2}
var(X) = e^{2µ+σ²}(e^{σ²} − 1)
γ1(X) = (e^{σ²} + 2)√(e^{σ²} − 1)
γ2(X) = (e^{3σ²} + 3e^{2σ²} + 6e^{σ²} + 6)(e^{σ²} − 1)

This confirms the skewness, with concentration to the left and spreading to the right, observed on the graph.
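The factored kurtosis expression above can be checked against the usual expanded form of the log-normal excess kurtosis, e^{4σ²} + 2e^{3σ²} + 3e^{2σ²} − 6. A quick numerical check (the parameter values are illustrative):

```python
import math

# Log-normal typical values for illustrative parameters mu = 0.1, sigma = 0.5.
mu, sigma = 0.1, 0.5
s2 = sigma ** 2

mean_x = math.exp(mu + s2 / 2)
var_x = math.exp(2 * mu + s2) * (math.exp(s2) - 1)
g1 = (math.exp(s2) + 2) * math.sqrt(math.exp(s2) - 1)

# Factored form from the text vs the expanded standard form of gamma2.
g2 = (math.exp(3 * s2) + 3 * math.exp(2 * s2) + 6 * math.exp(s2) + 6) * (math.exp(s2) - 1)
g2_expanded = math.exp(4 * s2) + 2 * math.exp(3 * s2) + 3 * math.exp(2 * s2) - 6

assert abs(g2 - g2_expanded) < 1e-9
assert g1 > 0  # skewed to the right, as the graph shows
```

Expanding the product term by term shows the two expressions are algebraically identical.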


Figure A2.12 Log-normal distribution

We would point out finally that a result of the same type as the central limit theorem also leads to the log-normal law: this is the case in which the effects represented by the various r.v.s accumulate through a multiplicative model rather than an additive model, because of the fundamental property of logarithms: ln(x1 · x2) = ln x1 + ln x2.

2.2.2 Other theoretical distributions

2.2.2.1 Poisson distribution

The Poisson r.v. with parameter µ is a discrete r.v. X that takes the non-negative integer values 0, 1, 2, ..., with the associated probabilities:

Pr[X = k] = e^{−µ} µ^k / k!   (k ∈ N)

The typical values for this distribution are given by:

E(X) = µ
var(X) = µ

2.2.2.2 Binomial distribution

The Bernoulli scheme is a probability model applied to a very wide range of situations. It is characterised by:

• a finite number of independent trials;
• during each trial, only two results – success and failure – are possible;
• the probability of a success occurring is the same in each trial.

If n is the number of trials and p the probability of success in each trial, the term used is Bernoulli scheme with parameters (n; p), and the number of successes out of the n trials is a binomial r.v., termed B(n; p). This discrete random variable takes the values 0, 1, 2, ..., n with the following associated probabilities:⁵

Pr[B(n; p) = k] = C(n, k) p^k (1 − p)^{n−k}   k ∈ {0, 1, ..., n}

The sum of these probabilities equals 1, in accordance with Newton's binomial formula. In addition, the typical values for this distribution are given by:

E(B(n; p)) = np
var(B(n; p)) = np(1 − p)

The binomial distribution allows two interesting approximations when the parameter n is large. Thus, for a very small p, we have the approximation through the Poisson law with parameter np:

Pr[B(n; p) = k] ≈ e^{−np} (np)^k / k!

For a p that is not too close to 0 or 1, the binomial r.v. tends towards a normal law with parameters (np; √(np(1 − p))); more specifically, with µ = np, σ = √(np(1 − p)) and Φ the standard normal d.f.:

Pr[B(n; p) = k] ≈ Φ((k − µ + ½)/σ) − Φ((k − µ − ½)/σ)

2.2.2.3 Student distribution

The Student distribution with ν degrees of freedom is defined by the density

f(x) = Γ((ν+1)/2) / (Γ(ν/2) √(νπ)) · (1 + x²/ν)^{−(ν+1)/2}

In this expression, the gamma function is defined by Γ(n) = ∫₀^{+∞} e^{−x} x^{n−1} dx. This generalises the factorial function, as Γ(n) = (n − 1) · Γ(n − 1) and, for integer n, we have Γ(n) = (n − 1)!. It is, however, also defined for values of n that are not integer – all positive real values of n – and, for example:

Γ(½) = √π

We are not representing the graph for this density here, as it is symmetrical with respect to the vertical axis and bears a strong resemblance to the standard normal density graph, although for ν > 4 the kurtosis coefficient is strictly positive:

E(X) = 0
var(X) = ν/(ν − 2)
γ1(X) = 0
γ2(X) = 6/(ν − 4)

⁵ Remember that C(n, k) = n! / (k!(n − k)!).
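The quality of the two approximations can be checked numerically. A minimal sketch (the parameter values are illustrative, not from the text):

```python
import math
from statistics import NormalDist

def binom_pmf(n, p, k):
    """Exact binomial probability Pr[B(n; p) = k]."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Poisson approximation: n large, p very small (here np = 2).
n, p, k = 1000, 0.002, 3
poisson = math.exp(-n * p) * (n * p) ** k / math.factorial(k)
assert abs(binom_pmf(n, p, k) - poisson) < 1e-3

# Normal approximation with continuity correction: n large, p moderate.
n, p, k = 1000, 0.4, 400
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
Phi = NormalDist().cdf
normal = Phi((k - mu + 0.5) / sigma) - Phi((k - mu - 0.5) / sigma)
assert abs(binom_pmf(n, p, k) - normal) < 1e-3
```

Both approximations agree with the exact probability to three decimal places for these parameter values.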


Finally, it can be stated that when the number of degrees of freedom tends towards infinity, the Student distribution tends towards the standard normal distribution, this asymptotic property being verified in practice as soon as ν reaches the value of 30.

2.2.2.4 Uniform distribution

A r.v. is said to be uniform on the interval [a; b] when the probability of its taking a value between t and t + h⁶ depends on these two boundaries only through h. It is easy to establish, on that basis, that we are looking at a r.v. that only takes values within the interval [a; b] and that its density is necessarily constant:

f(x) = 1/(b − a)   (a < x < b)

Its graph is shown in Figure A2.13.

Figure A2.13 Uniform distribution

The principal typical values for the uniform r.v. are given by:

E(X) = (a + b)/2
var(X) = (b − a)²/12
γ1(X) = 0
γ2(X) = −6/5

This uniform distribution is the origin of some simulation methods, in which the generation of random numbers distributed uniformly in the interval [0; 1] allows random numbers distributed according to a given law of probability to be obtained (Figure A2.14). The way in which this transformation occurs is explained in Section 7.3.1. Let us examine here how the (pseudo-)random numbers uniformly distributed in [0; 1] can be obtained. The sequence x1, x2, ..., xn is constructed according to residue classes. On the basis of an initial value ρ0 (equal to 1, for example), we construct, for i = 1, 2, ..., n:

xi = decimal part of (c1 ρ_{i−1})
ρi = c2 xi

Here, the constants c1 and c2 are suitably chosen. Thus, for c1 = 13.3669 and c2 = 94.3795, we find successively the values shown in Table A2.1.

⁶ These two values are assumed to belong to the interval [a; b].


Figure A2.14 Random numbers uniformly distributed in [0; 1]

Table A2.1  xi and ρi

i      xi          ρi
0      –           1.000000
1      0.366900    34.627839
2      0.866885    81.813352
3      0.580898    55.768652
4      0.453995    42.847849
5      0.742910    70.115509
6      0.226992    21.423384
7      0.364227    34.375527
8      0.494233    46.645452
9      0.505097    47.670759
10     0.210265    19.844676
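The residue-class generator described above can be sketched in a few lines; with ρ0 = 1 and the constants of Table A2.1, the first value x1 = 0.3669 is recovered directly (later entries accumulate floating-point rounding, so only the first value is checked here):

```python
# Residue-class generator from the text: x_i = decimal part of (c1 * rho_{i-1}),
# rho_i = c2 * x_i, with rho_0 = 1 and the constants used for Table A2.1.
c1, c2 = 13.3669, 94.3795

def pseudo_uniform(n, rho0=1.0):
    rho, xs = rho0, []
    for _ in range(n):
        x = (c1 * rho) % 1.0   # decimal (fractional) part
        rho = c2 * x
        xs.append(x)
    return xs

xs = pseudo_uniform(10)
assert abs(xs[0] - 0.366900) < 1e-6   # first entry of Table A2.1
assert all(0.0 <= x < 1.0 for x in xs)
```

Generators of this hand-rolled kind are for illustration only; in practice a library generator would be preferred.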

2.2.2.5 Generalised error distribution

The generalised error distribution with parameter ν is defined by the density

f(x) = [ν √Γ(3/ν) / (2 Γ(1/ν)^{3/2})] · exp[−(Γ(3/ν)/Γ(1/ν))^{ν/2} |x|^ν]

The graph for this density is shown in Figure A2.15. This is a distribution symmetrical with respect to 0, which corresponds to a normal distribution for ν = 2 and gives rise to a leptokurtic distribution (resp. a negative-kurtosis distribution) for ν < 2 (resp. ν > 2).

2.3 STOCHASTIC PROCESSES 2.3.1 General considerations The term stochastic process is applied to a random variable that is a function of the time variable: {Xt : t ∈ T }.

Figure A2.15 Generalised error distribution (ν = 1, 2, 3)

If the set T of times is discrete, the stochastic process is simply a sequence of random variables. However, in a number of financial applications, such as the Black and Scholes model, it will be necessary to consider stochastic processes in continuous time. For each possible result ω ∈ Ω, the function Xt(ω) of the variable t is known as the path of the stochastic process. A stochastic process is said to have independent increments when, regardless of the times t1 < t2 < ... < tn, the r.v.s Xt1, Xt2 − Xt1, Xt3 − Xt2, ... are independent. In the same way, a stochastic process is said to have stationary increments when, for every t and h, the r.v.s Xt+h − Xt and Xh are identically distributed.

2.3.2 Particular stochastic processes

2.3.2.1 The Poisson process

We consider a process of random occurrences of an event in time, corresponding to the set [0; +∞[. Here, the principal interest lies not directly in the occurrence times, but in the number of occurrences within given intervals. The r.v. that represents the number of occurrences within the interval [t1; t2] is termed n(t1; t2). This process is called a Poisson process if it obeys the following hypotheses:

• the numbers of occurrences in separate intervals of time are independent;
• the distribution of the number of occurrences within an interval of time depends on that interval only through its duration: Pr[n(t1; t2) = k] is a function of (t2 − t1), which is henceforth termed pk(t2 − t1);
• there is no multiple occurrence: if h is small, Pr[n(0; h) ≥ 2] = o(h);
• there is a rate of occurrence α such that Pr[n(0; h) = 1] = αh + o(h).

It can be demonstrated that under these hypotheses, the r.v. 'number of occurrences within an interval of duration t' is distributed according to a Poisson law with parameter αt:

pk(t) = e^{−αt} (αt)^k / k!   k = 0, 1, 2, ...


To simplify, we write Xt = n(0; t). This is a stochastic process that counts the number of occurrences over time. The path of such a process is therefore a stepped function, with the abscissas of the jumps corresponding to the occurrence times and the heights of the jumps being equal to 1. It can be demonstrated that the process has independent and stationary increments and that E(Xt) = var(Xt) = αt.

This process can be generalised as follows. We consider:

• A Poisson process Xt as defined above; with the time of the kth occurrence expressed as Tk, we have Xt = #{k : Tk ≤ t}.
• A sequence Y1, Y2, ... of independent and identically distributed r.v.s, independent of the Poisson process.

The process Zt = Σ_{k: Tk ≤ t} Yk is known as a compound Poisson process. The paths of such a process are therefore stepped functions, with the abscissas of the jumps corresponding to the occurrence times of the subjacent Poisson process and the heights of the jumps being the realised values of the r.v.s Yk. In addition, we have:

E(Zt) = αt · µY
var(Zt) = αt · (σY² + µY²)

2.3.2.2 Standard Brownian motion

Consider a sequence of r.v.s Xk, independent and identically distributed, with values (−ΔX) and ΔX with respective probabilities 1/2 and 1/2, and define the sequence of r.v.s Yn through Yn = X1 + X2 + · · · + Xn. This is known as a symmetrical random walk. As

E(Xk) = 0    var(Xk) = (ΔX)²

we have

E(Yn) = 0    var(Yn) = n(ΔX)²

For our modelling requirements, we separate the interval of time [0; t] into n subintervals of the same duration Δt = t/n and define Zt = Zt^(n) = Yn. We have:

E(Zt) = 0    var(Zt) = n(ΔX)² = ((ΔX)²/Δt) · t

This variable Zt allows the discrete development of a magnitude to be modelled. If we then wish to move to continuous modelling while retaining the same variability per unit of time, that is, with (ΔX)²/Δt = 1, for example, we obtain the stochastic process wt = lim_{n→∞} Zt^(n). This is a standard Brownian motion (also known as a Wiener process). It is clear that this stochastic process wt, defined on R⁺, is such that w0 = 0, that wt has independent and stationary increments, and that, in view of the central limit theorem, wt is distributed according to a normal law with parameters (0; √t). It can be shown that the paths of a Wiener process are continuous everywhere, but cannot generally be differentiated. In fact,

Δwt/Δt = ε√Δt/Δt = ε/√Δt

where ε is a standard normal r.v.
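The moments E(Yn) = 0 and var(Yn) = n(ΔX)² of the symmetrical random walk can be verified exactly by enumerating all 2ⁿ equally likely paths (the step size and horizon below are illustrative):

```python
from itertools import product

# Symmetrical random walk: each step is -dx or +dx with probability 1/2.
# Enumerating all 2^n equally likely paths gives E(Yn) and var(Yn) exactly.
dx, n = 0.5, 10
sums = [sum(path) for path in product((-dx, dx), repeat=n)]

mean = sum(sums) / len(sums)
var = sum(s ** 2 for s in sums) / len(sums) - mean ** 2

assert abs(mean) < 1e-12             # E(Yn) = 0
assert abs(var - n * dx ** 2) < 1e-9  # var(Yn) = n (dx)^2 = 2.5
```

Because the enumeration is exhaustive, this is an exact computation rather than a Monte-Carlo estimate.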


2.3.2.3 Itô process

If a more developed model is required, wt can be multiplied by a constant in order to produce a variability per time unit (ΔX)²/Δt different from 1, or a drift term can be added in order to obtain a non-zero mean:

Xt = X0 + a · t + b · wt

This type of model is not greatly effective because of the great variability of the development in the short term, the standard deviation of Xt being equal⁷ to b√t. For this reason, this type of construction is applied instead to variations relating to a short interval of time:

dXt = a · dt + b · dwt

It is possible to generalise by replacing the constants a and b by functions of t and Xt:

dXt = at(Xt) · dt + bt(Xt) · dwt

This type of process is known as an Itô process. In financial modelling, several specific cases of the Itô process are used. A geometric Brownian motion is obtained when:

at(Xt) = a · Xt    bt(Xt) = b · Xt

An Ornstein–Uhlenbeck process corresponds to:

at(Xt) = a · (c − Xt)    bt(Xt) = b

and the square root process is such that:

at(Xt) = a · (c − Xt)    bt(Xt) = b√Xt

2.3.3 Stochastic differential equations

Expressions of the type dXt = at(Xt) · dt + bt(Xt) · dwt cannot simply be handled in the same way as the corresponding deterministic expressions, because wt cannot be differentiated. It is, however, possible to extend the definition to a concept of stochastic differential, through the theory of stochastic integral calculus.⁸ For a stochastic process zt defined on the interval [a; b], the stochastic integral of zt over [a; b] with respect to the standard Brownian motion wt is defined by:

∫_a^b zt dwt = lim_{n→∞, δ→0} Σ_{k=0}^{n−1} z_{tk} (w_{t_{k+1}} − w_{tk})

where:

a = t0 < t1 < ... < tn = b
δ = max_{k=1,...,n} (tk − t_{k−1})

⁷ The root function presents a vertical tangent at the origin.
⁸ The full development of this theory is outside the scope of this work.
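The Itô processes introduced above can be simulated by discretising dXt = at(Xt) dt + bt(Xt) dwt over small steps (the Euler scheme). A minimal sketch; the drift, volatility and mean-reversion parameters are purely illustrative:

```python
import math
import random

random.seed(42)  # fixed seed for reproducibility

def euler(x0, a, b, T=1.0, n=1000):
    """Discretise dX = a(X) dt + b(X) dw on [0, T] with n Euler steps."""
    dt = T / n
    x, path = x0, [x0]
    for _ in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))  # increment ~ N(0, sqrt(dt))
        x = x + a(x) * dt + b(x) * dw
        path.append(x)
    return path

# Geometric Brownian motion: a(x) = 0.05 x, b(x) = 0.2 x (hypothetical values).
gbm = euler(1.0, lambda x: 0.05 * x, lambda x: 0.2 * x)

# Ornstein-Uhlenbeck: a(x) = 2 (1 - x), b(x) = 0.1, mean-reverting towards c = 1.
ou = euler(0.5, lambda x: 2.0 * (1.0 - x), lambda x: 0.1)

assert len(gbm) == 1001 and min(gbm) > 0
assert abs(ou[-1] - 1.0) < 0.5  # pulled towards the long-run level c = 1
```

The same function covers the square root process by passing b(x) = b·√x, provided the path stays non-negative.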

Let us now consider a stochastic process Zt (for which we wish to define the stochastic differential) and a standard Brownian motion wt. If there is a stochastic process zt such that Zt = Z0 + ∫_0^t zs dws, then Zt is said to admit the stochastic differential dZt = zt dwt. This differential is interpreted as follows: the stochastic differential dZt represents the variation (over a very short period of time dt) of Zt, triggered by a random variation dwt weighted by zt, which represents the volatility of Zt at the moment t. More generally, the definition of dXt = at(Xt) · dt + bt(Xt) · dwt is given by:

X_{t2} − X_{t1} = ∫_{t1}^{t2} at(Xt) dt + ∫_{t1}^{t2} bt(Xt) dwt

The stochastic differential has some of the properties of ordinary differentials, such as linearity. Not all of them, however, remain true. For example,⁹ the stochastic differential of a product of two stochastic processes for which the stochastic differentials of the factors are known,

dXt^(i) = at^(i) dt + bt^(i) dwt   (i = 1, 2)

is given by:

d(Xt^(1) Xt^(2)) = Xt^(1) dXt^(2) + Xt^(2) dXt^(1) + bt^(1) bt^(2) dt

Another property, which corresponds to the derivation formula for a compound function, is the well-known Itô formula.¹⁰ This formula gives the differential of a two-variable function of a stochastic process for which the stochastic differential is known, and of time. If the process Xt has the stochastic differential dXt = at dt + bt dwt and if f(x, t) is a C²-class function, the process f(Xt, t) admits the following stochastic differential:

df(Xt, t) = [f't(Xt, t) + f'x(Xt, t) at + ½ f''xx(Xt, t) bt²] · dt + f'x(Xt, t) bt · dwt

⁹ We will from now on leave out the argument Xt in the expressions of the functions at and bt.
¹⁰ Also known as Itô's lemma.
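The extra dt term in these formulae — the Itô correction — can be observed numerically. Taking f(x, t) = x², Itô's formula gives d(wt²) = dt + 2wt dwt, so wT² − 2Σ wk Δwk should be close to T, not to 0 as ordinary calculus would suggest (a sketch with a fixed seed):

```python
import math
import random

random.seed(7)

# With f(x) = x^2, Ito's formula gives d(w^2) = dt + 2 w dw, so
# w_T^2 - 2 * sum(w_k dw_k) approximates sum (dw_k)^2, which concentrates at T.
T, n = 1.0, 200_000
dt = T / n
w, ito_integral = 0.0, 0.0
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    ito_integral += 2.0 * w * dw   # left-endpoint (Ito) sum
    w += dw

correction = w ** 2 - ito_integral
assert abs(correction - T) < 0.05  # the Ito correction term, ~ T = 1
```

The quantity Σ(Δwk)² has mean T and variance 2T²/n, so with n = 200 000 steps it is within a few thousandths of T.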

Appendix 3 Statistical Concepts¹

3.1 INFERENTIAL STATISTICS

3.1.1 Sampling

3.1.1.1 Principles

In inferential statistics, we are usually interested in a population and the variables measured on the individual members of that population. Unfortunately, the population as a whole is often far too large, and sometimes not sufficiently well known, to be handled directly. To observe information, therefore, we must confine ourselves to a subset of the population, known as a sample. Then, on the basis of observations made on that sample, we attempt to deduce (infer) conclusions relating to the population. The operation that consists of extracting the sample from the population is known as sampling. It is here that probability theory becomes involved, constituting the link between the population and the sample. Sampling is said to be simply random when the individual members are extracted independently from the population and all have the same probability of being chosen. In practice, this is not necessarily the case, and the procedures set up for carrying out the sampling process must imitate chance as closely as possible.

3.1.1.2 Sampling distribution

Suppose that we are interested in a parameter θ of the population. If we extract a sample x1, x2, ..., xn from the population, we can calculate the corresponding parameter value θ̂(x1, x2, ..., xn) for this sample. As the sampling is at the origin of the fortuitous aspect of this procedure, for another sample x'1, x'2, ..., x'n we would have obtained another parameter value θ̂(x'1, x'2, ..., x'n). We are therefore constructing a r.v. Θ̂, whose various possible values are the results of the calculation of θ̂ for all the possible samples. The law of probability of this r.v. is known as the sampling distribution.
In order to illustrate this concept, let us consider the sampling distribution for the mean of the population, and suppose that the variable considered has mean µ and variance σ². On the basis of the various samples, it is possible to calculate an average on each occasion:

x̄ = (1/n) Σ_{i=1}^n xi
x̄' = (1/n) Σ_{i=1}^n x'i
· · ·
¹ Readers interested in finding out more about the concepts developed below should read: Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988; Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux, 1975; Johnston J., Econometric Methods, McGraw-Hill, 1972; Justens D., Statistique pour décideurs, De Boeck, 1988; Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.


We thus define a r.v. X̄ for which it can be demonstrated that:

E(X̄) = µ
var(X̄) = σ²/n

The first of these two relations justifies the choice of the average of the sample as an estimator for the mean of the population. It is referred to as an unbiased estimator.

Note
If we examine in a similar way the sampling distribution for the variance, calculated on the basis of a sample using s² = (1/n) Σ_{i=1}^n (xi − x̄)², the associated r.v. S² will be such that:

E(S²) = ((n − 1)/n) σ²

We are no longer looking at an unbiased estimator, but at an asymptotically unbiased estimator (for n tending towards infinity). For this reason, we frequently choose the following expression as an estimator for the variance:

(1/(n − 1)) Σ_{i=1}^n (xi − x̄)²

3.1.2 Two problems of inferential statistics

3.1.2.1 Estimation

If the problem is one of estimating a parameter θ of the population, we must construct an estimator Θ̂ that is a function of the values observed through the sampling procedure. It is therefore important for this estimator to be of good quality for evaluating the parameter θ. We thus often require an unbiased estimator: E(Θ̂) = θ. Nevertheless, of all the unbiased estimators, we want the estimator adopted to have other properties, most notably that its dispersion around the central value θ be as small as possible: its variance var(Θ̂) = E[(Θ̂ − θ)²] must be minimal.²

Alongside this point estimation (there is only one estimate per sample), a precision is generally calculated for the estimation by determining an interval [Θ̂1; Θ̂2], centred on the value Θ̂, that contains the true value of the parameter θ to be estimated with a given probability:

Pr[Θ̂1 ≤ θ ≤ Θ̂2] = 1 − α

with α = 0.05, for example. This interval is termed the confidence interval for θ and the number (1 − α) is the confidence coefficient. This estimation by confidence interval is only possible if one knows the sampling distribution for Θ̂, for example because the population obeys a known distribution or because certain asymptotic results, such as the central limit theorem, can be applied to it.
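The bias of S² noted above can be verified exactly by enumerating all equally likely samples drawn with replacement from a small population (the population {0, 1, 2} is an illustrative choice):

```python
from itertools import product
from fractions import Fraction

# Population {0, 1, 2}: mean 1, variance sigma^2 = 2/3.
population = (0, 1, 2)
sigma2 = Fraction(2, 3)
n = 2  # sample size

def s2(sample, ddof):
    """Sample variance, dividing by (len - ddof)."""
    m = Fraction(sum(sample), len(sample))
    return sum((x - m) ** 2 for x in sample) / Fraction(len(sample) - ddof)

# Average the estimator over every equally likely sample of size n.
samples = list(product(population, repeat=n))
mean_biased = sum(s2(s, 0) for s in samples) / len(samples)
mean_unbiased = sum(s2(s, 1) for s in samples) / len(samples)

assert mean_biased == Fraction(n - 1, n) * sigma2   # E(S^2) = (n-1)/n * sigma^2
assert mean_unbiased == sigma2                      # dividing by n-1 removes the bias
```

Exact rational arithmetic makes the identities E(S²) = ((n − 1)/n)σ² and the unbiasedness of the (n − 1)-denominator estimator hold exactly, not just approximately.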
Let us examine, by way of an example, the estimation of the mean of a normal population through a confidence interval. It is already known that the 'best' estimator is the sample average X̄, which is distributed following a normal law with parameters (µ; σ/√n); the r.v.

(X̄ − µ)/(σ/√n)

is thus standard normal. If the quantile function of this last distribution is termed Q(u), we have:

Pr[Q(α/2) ≤ (X̄ − µ)/(σ/√n) ≤ Q(1 − α/2)] = 1 − α

Pr[X̄ − (σ/√n) Q(1 − α/2) ≤ µ ≤ X̄ − (σ/√n) Q(α/2)] = 1 − α

Pr[X̄ − (σ/√n) Q(1 − α/2) ≤ µ ≤ X̄ + (σ/√n) Q(1 − α/2)] = 1 − α

² For example, the sample average is the unbiased estimator with minimal variance for the mean of the population.

This last equality makes up the confidence interval formula for the mean; it can also be written more concisely as:

I.C.(µ): X̄ ± (σ/√n) Q(1 − α/2)   (s.p. α)

We indicate that, in this last formula, the standard deviation σ of the population is generally not known. If it is replaced by its estimate calculated on the basis of the sample, the quantile of the normal distribution must be replaced by the quantile of the Student distribution with (n − 1) degrees of freedom.

3.1.2.2 Hypothesis test

The aim of a hypothesis test is to confirm or refute, on the basis of a sample, a hypothesis formulated about a population. In this way, we distinguish:

• The goodness-of-fit tests: verifying whether the population from which the sample is taken is distributed according to a given law of probability.
• The independence tests between certain classification criteria defined on the population (these are also used for testing independence between r.v.s).
• The compliance tests: verifying whether a population parameter is equal to a given value.
• The homogeneity tests: verifying whether the values of a parameter measured on more than one population are the same (this requires one sample to be extracted per population).

The procedure for carrying out a hypothesis test can be described as follows. After defining the hypothesis to be tested H0, also known as the null hypothesis, and the alternative hypothesis H1, we determine under H0 the sampling distribution of the parameter to be studied. With the confidence coefficient (1 − α) fixed, the sample is allocated either to the region of acceptance (AH0) or to the region of rejection (RH0) of H0. Four situations may therefore arise, depending on the reality on the one hand and the decision taken on the other hand (see Table A3.1). Zones (a) and (d) in Table A3.1 correspond to correct conclusions of the test. In zone (b) the hypothesis is rejected although it is true; this is a first-type error, the probability of which is the complementary α of the confidence coefficient fixed beforehand. In zone

Table A3.1  Hypothesis test conclusions

                    Decision
Reality        AH0        RH0
H0             (a)        (b)
H1             (c)        (d)
(c), the hypothesis is accepted although it is false; this is a second-type error, the probability β of which is unknown. A good test will therefore have a small parameter β; the complementary (1 − β) of this probability is called the power of the test.

By way of an example, we present the compliance test for the mean of a normal population. The hypothesis under test is, for example, H0: µ = 1. The rival hypothesis is written as H1: µ ≠ 1. Under H0, the r.v.

(X̄ − 1)/(σ/√n)

follows a standard normal law, and the hypothesis being tested will therefore be rejected when:

|X̄ − 1|/(σ/√n) > Q(1 − α/2)   (s.p. α)

Again, the normal distribution quantile is replaced by the quantile of the Student distribution with (n − 1) degrees of freedom if the standard deviation of the population is replaced by the standard deviation of the sample.
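Both the confidence interval and the compliance test above can be carried out in a few lines (the sample below is hypothetical and σ is assumed known, so the normal quantile applies):

```python
from math import sqrt
from statistics import NormalDist, mean

# Hypothetical sample from a normal population with known sigma.
sample = [1.2, 0.9, 1.4, 1.1, 1.3, 1.0, 1.2, 1.1, 1.3, 1.2]
sigma, alpha = 0.2, 0.05

n = len(sample)
xbar = mean(sample)
q = NormalDist().inv_cdf(1 - alpha / 2)   # Q(1 - alpha/2) ~ 1.96

# Confidence interval I.C.(mu): xbar +/- (sigma / sqrt(n)) Q(1 - alpha/2).
half_width = sigma / sqrt(n) * q
ci = (xbar - half_width, xbar + half_width)

# Compliance test H0: mu = 1 against H1: mu != 1.
z = (xbar - 1.0) / (sigma / sqrt(n))
reject_h0 = abs(z) > q

assert abs(q - 1.959964) < 1e-5
assert ci[0] < xbar < ci[1]
assert reject_h0  # here 1 lies outside the 95% confidence interval
```

Note the duality visible in the assertions: H0: µ = 1 is rejected at level α exactly when 1 falls outside the (1 − α) confidence interval.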

3.2 REGRESSIONS

3.2.1 Simple regression

Let us assume that a variable Y depends on another variable X through a linear relation Y = aX + b, and that a series of observations is available for this pair of variables (X, Y): (xt, yt), t = 1, ..., n.

3.2.1.1 Estimation of model

If the observation pairs are represented on the (X, Y) plane, it will be noticed that there are differences between them and a straight line (see Figure A3.1).

Figure A3.1 Simple regression

These differences
may arise, especially in the field of economics, through failure to take account of certain explanatory factors of the variable Y. It is therefore necessary to find the straight line that passes as closely as possible to the point cloud, that is, the straight line for which the differences εt = yt − (a xt + b) are as small as possible overall. The criterion most frequently used is that of minimising the sum of the squares of these differences (referred to as the least squares method). The problem is therefore one of searching for the parameters a and b for which the expression

Σ_{t=1}^n εt² = Σ_{t=1}^n [yt − (a xt + b)]²

is minimal. It can easily be shown that these parameters are:

â = sxy/sx² = Σ_{t=1}^n (xt − x̄)(yt − ȳ) / Σ_{t=1}^n (xt − x̄)²
b̂ = ȳ − â x̄

These are unbiased estimators of the real unknown parameters a and b. In addition, of all the unbiased estimators expressed linearly as a function of the yt, they are the ones with the smallest variance.³ The straight line obtained using this procedure is known as the regression line.

3.2.1.2 Validation of model

The significantly explanatory character of the variable X in this model can be tested through the hypothesis H0: a = 0. If we are led to reject the hypothesis, it is because X significantly explains Y through the model, which is therefore validated. Because, under certain probability hypotheses on the residuals εt, the estimator for a is distributed according to a Student law with (n − 2) degrees of freedom, the hypothesis will be rejected (and the model therefore accepted) if

|â|/sâ > t^{(n−2)}_{1−α/2}   (s.p. α)

where sâ is the standard deviation of the estimator for a, measured on the observations.

3.2.2 Multiple regression

The regression model that we have just presented can be generalised to the case where several explanatory variables are involved at once: Y = α0 + α1 X1 + · · · + αk Xk.

³ They are referred to as BLUE (Best Linear Unbiased Estimators).
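The least-squares formulae for â and b̂ translate directly into code. A minimal sketch, checked on observations lying exactly on a straight line (the data are illustrative):

```python
from statistics import mean

def least_squares(xs, ys):
    """Estimate a and b in y = a x + b by minimising the sum of squared residuals."""
    xbar, ybar = mean(xs), mean(ys)
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = ybar - a * xbar
    return a, b

# Observations lying exactly on y = 2x + 1 must be recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]
a, b = least_squares(xs, ys)

assert abs(a - 2.0) < 1e-12 and abs(b - 1.0) < 1e-12
```

On noisy data the same function returns the regression line through the point cloud rather than an exact fit.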


In this case, if the observations x and y and the parameters α are presented as matrices

Y = (y1, y2, ..., yn)ᵀ
α = (α0, α1, ..., αk)ᵀ
X = the (n, k + 1) matrix whose tth row is (1, xt1, ..., xtk)

it can be shown that the vector of the parameter estimations is given by:

α̂ = (Xᵀ X)⁻¹ (Xᵀ Y)

In addition, the Student validation test shown for the simple regression also applies here. It is used to test the significantly explanatory nature of a variable within the multiple model, the only alteration being the number of degrees of freedom, which passes from (n − 2) to (n − k − 1). We should mention that there are other tests for the overall validity of the multiple regression model.

3.2.3 Nonlinear regression

It may turn out that the relation allowing Y to be explained by X1, X2, ..., Xk is not linear: Y = f(X1, X2, ..., Xk). In this case, the relation can sometimes be made linear by a simple analytical conversion. For example, Y = aX^b is converted by a logarithmic transformation:

ln Y = ln a + b ln X
Y* = a* + bX*

We are thus brought back to a linear regression model. Other models cannot be transformed quite so simply. Thus, Y = a + X^b is not equivalent to a linear model. In this case, more developed techniques, generally of an iterative nature, must be used to estimate the parameters of this type of model.
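The logarithmic linearisation can be demonstrated end to end: regress ln Y on ln X, then transform the intercept back. Data generated from Y = 3X^1.5 (an illustrative model, not from the text) must return a = 3 and b = 1.5:

```python
import math
from statistics import mean

# Fit Y = a X^b by regressing ln Y on ln X: ln Y = ln a + b ln X.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0 * x ** 1.5 for x in xs]   # exact data from Y = 3 X^1.5

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
mx, my = mean(lx), mean(ly)

b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
     / sum((u - mx) ** 2 for u in lx))
a = math.exp(my - b * mx)   # intercept ln a transformed back

assert abs(b - 1.5) < 1e-9 and abs(a - 3.0) < 1e-9
```

The same two-step pattern (transform, fit linearly, back-transform) applies to any model that a change of variables makes linear; Y = a + X^b, as noted above, admits no such transformation.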

Appendix 4 Extreme Value Theory

4.1 EXACT RESULT

Let us consider a sequence of r.v.s X1, X2, ..., Xn, independent and identically distributed with a common distribution function FX. Let us also consider the sequence of r.v.s Z1, Z2, ..., Zn defined by:

Zk = max(X1, ..., Xk)   k = 1, ..., n

The d.f. of Zn is given by:

F^(n)(z) = Pr[max(X1, ..., Xn) ≤ z]
         = Pr([X1 ≤ z] ∩ · · · ∩ [Xn ≤ z])
         = Pr[X1 ≤ z] · ... · Pr[Xn ≤ z]
         = FX^n(z)

Note
When one wishes to study the distribution of an extreme Zn for a large number n of r.v.s, the precise formula established above is not greatly useful. In fact, we need a result that does not depend essentially on the d.f. FX, as FX is not necessarily known with any great accuracy. In addition, when n tends towards infinity, the r.v. Zn tends towards a degenerate r.v., as:

lim_{n→∞} F^(n)(z) = 0 if FX(z) < 1
lim_{n→∞} F^(n)(z) = 1 if FX(z) = 1

It was for this reason that asymptotic extreme value theory was developed.
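The exact result F^(n)(z) = FX^n(z) can be verified by full enumeration in a discrete case. For a fair die, FX(z) = z/6 for integer z, and the d.f. of the maximum of n throws equals (z/6)^n exactly (the die is an illustrative choice):

```python
from itertools import product
from fractions import Fraction

# For a fair die, F_X(z) = z/6 for z = 0..6; the d.f. of the maximum of
# n independent throws must equal F_X(z)^n. Checked by full enumeration.
n = 3
outcomes = list(product(range(1, 7), repeat=n))

for z in range(0, 7):
    count = sum(1 for o in outcomes if max(o) <= z)
    lhs = Fraction(count, len(outcomes))   # Pr[max <= z]
    rhs = Fraction(z, 6) ** n              # F_X(z)^n
    assert lhs == rhs
```

The exact rational arithmetic confirms the identity at every point z, with no sampling error.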

4.2 ASYMPTOTIC RESULTS

Asymptotic extreme value theory originates in the work of R. A. Fisher,¹ and the problem was fully solved by B. Gnedenko.²

4.2.1 Extreme value theorem

The extreme value theorem states that, under the hypothesis of independence and equal distribution of the r.v.s X1, X2, ..., Xn, if there are also two sequences of coefficients αn > 0

¹ Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1928, pp. 180–90.
² Gnedenko B. V., On the distribution limit of the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.


and βn (n = 1, 2, ...) such that the limit (for n → ∞) of the random variable

Yn = [max(X1, ..., Xn) − βn] / αn

is not degenerate, then it admits a law of probability defined by a distribution function that must take one of the following three forms:

Λ(z) = exp[−e^{−z}] for all z   (Gumbel's law)
Φ(z) = 0 if z ≤ 0;  exp[−z^{−k}] if z > 0   (Fréchet's law)
Ψ(z) = exp[−(−z)^k] if z < 0;  1 if z ≥ 0   (Weibull's law)

4.2.2 Attraction domains

The attraction domain of Fréchet's law is characterised by the presence of a positive parameter k such that:

lim_{x→∞} [1 − FX(x)] / [1 − FX(ux)] = u^k   ∀u > 0

³ That is, the value that corresponds to the maximum of the probability density.

Extreme Value Theory

367

The laws covered by this description are those whose tails decrease less rapidly than the exponential, such as Student's law, Cauchy's law and the stable Pareto laws. Finally, the attraction domain of Weibull's law is characterised by the presence of a number x0 for which FX(x0) = 1 and FX(x) < 1 when x < x0, and the presence of a positive parameter k, such that:

lim_{x→0−} [1 − FX(x0 + ux)] / [1 − FX(x0 + x)] = u^k   ∀u > 0

This category contains the bounded-support distributions, such as the uniform law.

4.2.3 Generalisation

A. F. Jenkinson was able to give Gnedenko's result a unified form. In fact, if for Fréchet's law we put z = 1 − τy and k = −1/τ, then when τ < 0 we obtain

exp[−z^(−k)] = exp[−(1 − τy)^(1/τ)]

a relation valid for z > 0, that is, y > 1/τ (for the other values of y, the d.f. takes the value 0). In the same way, for Weibull's law, we put z = τy − 1 and k = 1/τ. Then, when τ > 0, we obtain

exp[−(−z)^k] = exp[−(1 − τy)^(1/τ)]

a relation valid for z < 0, that is, y < 1/τ (for the other values of y, the d.f. takes the value 1). We therefore have the same analytical expression in both cases. We will now see that the same applies to Gumbel's law. By passage to the limit (putting n = 1/τ), we can easily find:

lim (τ→0±) exp[−(1 − τy)^(1/τ)] = exp[−lim (n→±∞) (1 − y/n)^n] = exp[−e^(−y)]

which is the expression set out in Gumbel's law. To sum up: putting a(y) = exp[−(1 − τy)^(1/τ)], the d.f. FY of the extreme limit distribution is written as follows:

If τ < 0 (Fréchet's law):  FY(y) = 0 if y ≤ 1/τ;  a(y) if y > 1/τ.
If τ = 0 (Gumbel's law):   FY(y) = a(y)  ∀y.
If τ > 0 (Weibull's law):  FY(y) = a(y) if y < 1/τ;  1 if y ≥ 1/τ.

This, of course, is the result shown in Section 7.4.2.
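The unified form above is easy to compute. The following sketch (the function name and branch layout are ours) returns FY(y) for any τ, taking the limit expression for τ = 0:

```python
import math

def gev_cdf(y, tau):
    """Distribution function F_Y(y) of the extreme limit law in the unified
    Jenkinson parameterisation used in the text.
    tau < 0: Frechet; tau = 0: Gumbel; tau > 0: Weibull."""
    if tau == 0.0:
        return math.exp(-math.exp(-y))        # Gumbel: exp[-e^(-y)]
    if tau < 0.0 and y <= 1.0 / tau:
        return 0.0                            # Frechet branch, below support
    if tau > 0.0 and y >= 1.0 / tau:
        return 1.0                            # Weibull branch, above support
    return math.exp(-(1.0 - tau * y) ** (1.0 / tau))
```

For small τ the value approaches the Gumbel expression, which is exactly the passage to the limit described above.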

Appendix 5 Canonical Correlations

5.1 GEOMETRIC PRESENTATION OF THE METHOD

The aim of canonical analysis1 is to study the linear relations that exist between the static spreads and the dynamic spreads observed on the same sample. We look for a linear combination of the static spreads and a linear combination of the dynamic spreads that are as highly correlated as possible. We therefore have two sets of characters: x1, x2, . . . , xp on one hand and y1, y2, . . . , yq on the other. In addition, the characters are assumed to be centred, standardised and observed on the same number n of individuals. The two sets of characters generate the respective associated vectorial subspaces V1 and V2 of Rn. We also introduce the matrices X and Y, with respective formats (n, p) and (n, q), whose columns are the observations of the various characters. As the characters are centred, the same applies to the vectors of these subspaces. Geometrically, therefore, the problem of canonical analysis can be stated as follows: find ξ ∈ V1 and η ∈ V2 so that cos²(ξ, η) = r²(ξ, η) is maximised.

5.2 SEARCH FOR CANONICAL CHARACTERS

Let us assume that the characters ξ1 and η1 are solutions to the problem (see Figure A5.1). The angle between ξ1 and η1 does not depend on their norm (length): V1 and V2 are invariant when the base vectors are multiplied by a scalar, and therefore cos²(ξ1, η1) does not depend on the base vector norms. We may then assume that ||ξ1|| = ||η1|| = 1. The character η1 must be collinear with the orthogonal projection of ξ1 onto V2, which is the vector of V2 that makes a minimum angle with ξ1. This condition is written A2ξ1 = r1η1, where r1² = cos²(ξ1, η1) and A2 is the operator of orthogonal projection onto V2. In the same way, we have A1η1 = r1ξ1. These two relations produce the system

A1A2 ξ1 = λ1 ξ1
A2A1 η1 = λ1 η1

where λ1 = r1² = cos²(ξ1, η1). It is therefore deduced that ξ1 and η1 are respectively eigenvectors of the operators A1A2 and A2A1, associated with the same highest eigenvalue λ1, this value being equal

1 A detailed description of this method, and of other multivariate statistical methods, is found in Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; and in Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.

Figure A5.1 Canonical correlations (ξ1 in V1 and η1 in V2, with the orthogonal projections A2ξ1 and A1η1)

to their squared cosine, that is, their squared correlation. The characters ξ1 and η1 are deduced from each other by a simple linear application:

η1 = (1/√λ1) A2ξ1   and   ξ1 = (1/√λ1) A1η1

The following canonical characters are the eigenvectors of A1A2 associated with the eigenvalues λi sorted in decreasing order. If the canonical characters of order i are written ξi = a1x1 + · · · + apxp and ηi = b1y1 + · · · + bqyq (in other words, in matrix terms, ξi = Xa and ηi = Yb), and if the diagonal matrix of the weights is expressed as D, it can be shown that:

b = (1/√λi) (YᵗDY)⁻¹ (XᵗDY)ᵗ a
a = (1/√λi) (XᵗDX)⁻¹ (XᵗDY) b
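As a numerical illustration of the search for the first canonical pair, assuming uniform weights (so that D drops out of the products) and a plain eigendecomposition; the function and variable names are ours:

```python
import numpy as np

def canonical_correlations(X, Y):
    """First canonical pair for centred data matrices X (n, p) and Y (n, q).
    Solves the eigenproblem of (X'X)^-1 X'Y (Y'Y)^-1 Y'X, the matrix form
    of the operator A1 A2 restricted to V1."""
    Sxx, Syy, Sxy = X.T @ X, Y.T @ Y, X.T @ Y
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    lam, vecs = np.linalg.eig(M)
    order = np.argsort(lam.real)[::-1]
    lam1 = lam.real[order[0]]                     # squared canonical correlation
    a = vecs.real[:, order[0]]
    b = np.linalg.solve(Syy, Sxy.T @ a) / np.sqrt(lam1)   # b from a, as in the text
    return np.sqrt(lam1), a, b
```

The returned √λ1 equals the correlation between the scores Xa and Yb, which is the defining property of the first canonical pair.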

Appendix 6 Algebraic Presentation of Logistic Regression

Let Y be the binary qualitative variable (0 for periods of equilibrium, 1 for breaks in equilibrium) that we wish to explain by the quantitative explanatory variables X1, . . . , Xp. The model looks to evaluate the following probabilities:

pi = Pr[Y = 1 | X1 = xi1; . . . ; Xp = xip]

The logistic regression model1 is a nonlinear regression model. Here, the specification of the model is based on the use of the logistic function:

G(p) = ln [p / (1 − p)]

In this type of model, it is considered that there is a linear dependency between G(pi) and the explanatory variables:

G(pi) = β0 + β1xi1 + · · · + βpxip

where β0, β1, . . . , βp are the unknown parameters to be estimated. By introducing the vector β of these coefficients and the vector

zi = (1, xi1, . . . , xip)ᵗ

the binomial probability can be expressed in the form

pi = e^(βᵗzi) / (1 + e^(βᵗzi))

The method for estimating the parameters is that of maximising the likelihood function through successive iterations. This likelihood function is the product of the probabilities relative to each individual member:

L(β) = Π{i: yi=1} [e^(βᵗzi) / (1 + e^(βᵗzi))] · Π{i: yi=0} [1 / (1 + e^(βᵗzi))]

1 A detailed description of this method, and of other multivariate statistical methods, is found in Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; and in Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.
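The iterative maximisation of L(β) can be sketched as follows; the choice of Newton-Raphson updates on the log-likelihood is ours, not prescribed by the text:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood estimate of beta for p_i = e^(b'z)/(1+e^(b'z)),
    with z_i = (1, x_i1, ..., x_ip).  Newton-Raphson on the log-likelihood."""
    Z = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend the constant 1
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ beta))        # current probabilities p_i
        grad = Z.T @ (y - p)                       # score vector
        H = Z.T @ (Z * (p * (1 - p))[:, None])     # expected information matrix
        beta += np.linalg.solve(H, grad)
    return beta
```

Each update solves the linearised score equations, which is the usual way the "successive iterations" of the text are carried out in practice.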

Appendix 7 Time Series Models: ARCH-GARCH and EGARCH

7.1 ARCH-GARCH MODELS

The ARCH-GARCH models (auto-regressive conditional heteroscedasticity and generalised auto-regressive conditional heteroscedasticity) were developed by Engle1 in 1982 in the context of studies of macroeconomic data. The ARCH model allows the variance of the error term to be modelled explicitly. Heteroscedasticity can be integrated by introducing an exogenous variable x that drives the variance of the error term. This modelling can take one of the following forms:

yt = et · xt−1   or   yt = et · yt−1

Here, et is a white noise (a sequence of uncorrelated r.v.s with zero mean and common variance). In order to prevent the variance of this process from being infinite or zero, it is preferable to take the following formulation:

yt = a0 + Σ (i=1 to p) ai yt−i + εt

with:

E(εt) = 0
var(εt) = γ + Σ (i=1 to q) αi ε²t−i

This type of model is generally expressed as AR(p)-ARCH(q) or ARCH(p, q).
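As an illustration, an AR(1)-ARCH(1) process of the above form can be simulated as follows (the parameter values are ours and purely illustrative):

```python
import math
import random

def simulate_ar1_arch1(T, a0=0.1, a1=0.5, gamma=0.2, alpha=0.3, seed=1):
    """Simulate y_t = a0 + a1*y_{t-1} + eps_t, where the innovation has
    conditional variance gamma + alpha*eps_{t-1}^2 (ARCH(1) errors)."""
    rng = random.Random(seed)
    y, eps = 0.0, 0.0
    path = []
    for _ in range(T):
        h = gamma + alpha * eps ** 2           # conditional variance of eps_t
        eps = math.sqrt(h) * rng.gauss(0, 1)   # heteroscedastic innovation
        y = a0 + a1 * y + eps
        path.append(y)
    return path

path = simulate_ar1_arch1(5000)
```

With these values the unconditional mean is a0/(1 − a1) = 0.2 and the unconditional variance of εt is γ/(1 − α), which the simulated path reproduces approximately.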

7.2 EGARCH MODELS

These models, unlike the ARCH-GARCH models, allow the conditional variance to respond in different ways to a fall or a rise in the series. This configuration is of particular interest in generally increasing financial series. An example of this type of model is Nelson's:2

xt = µ + √ht · χt
ln ht = α + β ln ht−1 + δ(|χt−1| − √(2/π)) + γχt−1

Here, χt | It−1 follows a standard normal law (It−1 representing the information available at the moment t − 1).

1 Engle R. F., Auto-regressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, No. 50, 1982, pp. 987–1003. A detailed presentation of time series models will also be found in Droesbeke J.-J., Fichet B. and Tassi P., Modélisation ARCH, théorie statistique et applications dans le domaine de la finance, Éditions ULB, 1994; and in Gouriéroux C., Modèles ARCH et applications financières, Economica, 1992.
2 Nelson D. B., Conditional heteroscedasticity in asset returns: a new approach, Econometrica, No. 59, 1991, pp. 347–70.
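Nelson's recursion can likewise be simulated directly; the parameter values below are illustrative assumptions of ours:

```python
import math
import random

def simulate_egarch(T, mu=0.0, alpha=-0.1, beta=0.9, delta=0.2, gamma=-0.1, seed=7):
    """Simulate x_t = mu + sqrt(h_t)*chi_t with
    ln h_t = alpha + beta*ln h_{t-1} + delta*(|chi_{t-1}| - sqrt(2/pi)) + gamma*chi_{t-1}.
    A negative gamma makes variance rise more after falls than after rises."""
    rng = random.Random(seed)
    log_h = alpha / (1 - beta)       # start at the unconditional mean of ln h
    chi_prev = 0.0
    xs = []
    for _ in range(T):
        log_h = (alpha + beta * log_h
                 + delta * (abs(chi_prev) - math.sqrt(2 / math.pi))
                 + gamma * chi_prev)
        chi = rng.gauss(0, 1)
        xs.append(mu + math.exp(0.5 * log_h) * chi)
        chi_prev = chi
    return xs

xs = simulate_egarch(20000)
```

Because the recursion acts on ln ht, positivity of the conditional variance is automatic, which is one of the practical attractions of the EGARCH form.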

Appendix 8 Numerical Methods for Solving Nonlinear Equations1

An equation is said to be nonlinear when it involves terms of degree higher than 1 in the unknown quantity. These terms may be polynomial, or capable of being expanded in Taylor series with terms of degree higher than 1. Nonlinear equations cannot in general be solved analytically; in this case, the solutions of the equations must be approached using iterative methods. The principle of these methods is to start from an arbitrary point, as close as possible to the solution sought, and to arrive at the solution gradually through successive trials. The two criteria to take into account when choosing a method for solving nonlinear equations are:

• the convergence of the method (conditions of convergence, speed of convergence etc.);
• the computational cost of the method.

8.1 GENERAL PRINCIPLES FOR ITERATIVE METHODS

8.1.1 Convergence

Any nonlinear equation f(x) = 0 can be expressed as x = g(x). If x0 constitutes the arbitrary starting point of the method, the solution x* of this equation, x* = g(x*), can be reached by the numerical sequence:

xn+1 = g(xn)    n = 0, 1, 2, . . .

This iteration is termed a Picard process, and x*, the limit of the sequence, is termed the fixed point of the iteration. In order for this sequence to tend towards the solution of the equation, it has to be guaranteed that the sequence converges. A sufficient condition for convergence is supplied by the following theorem: if x = g(x) has a solution a within the interval I = [a − b; a + b] = {x : |x − a| ≤ b} and if g(x) satisfies Lipschitz's condition:

∃L ∈ [0; 1[ : ∀x ∈ I, |g(x) − g(a)| ≤ L|x − a|

then, for every x0 ∈ I:

• all the iterated values xn will belong to I;
• the iterated values xn will converge towards a;
• the solution a will be unique within the interval I.

1 This appendix is mostly based on Litt F. X., Analyse numérique, première partie, ULG, 1999. Interested readers should also read: Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981; and Nougier J. P., Méthodes de calcul numérique, Masson, 1993.


We should also note a case in which Lipschitz's condition is satisfied: it is sufficient that, for every x ∈ I, g′(x) exists and is such that |g′(x)| ≤ m with m < 1.

8.1.2 Order of convergence

It is important to choose the most suitable of the methods that converge. At this level, one of the most important criteria to take into account is the speed, or order, of convergence. Consider the sequence xn defined above, and the error en = xn − a. If there is a number p and a constant C > 0 so that

lim (n→∞) |en+1| / |en|^p = C

then p is termed the order of convergence of the sequence and C the asymptotic error constant. When the speed of convergence is unsatisfactory, it can be improved by the Aitken extrapolation,2 which is a convergence acceleration process. The speed of convergence of this extrapolation is governed by the following result:

• If Picard's iterative method is of order p, the Aitken extrapolation will be of order 2p − 1.
• If Picard's iterative method is of the first order, Aitken's extrapolation will be of the second order in the case of a simple solution, and of the first order in the case of a multiple solution. In this last case, the asymptotic error constant is equal to 1 − 1/m, where m is the multiplicity of the solution.

8.1.3 Stop criteria

As stated above, the iterative methods for solving nonlinear equations supply an approximate solution of the equation. It is therefore essential to be able to estimate the error in the solution. Working from the mean value theorem:

f(xn) = (xn − a) f′(ξ),   with ξ ∈ [xn; a]

we can deduce the following estimation of the error:

|xn − a| ≤ |f(xn)| / M,   where |f′(x)| ≥ M for x ∈ [xn; a]

In addition, the rounding error inherent in every numerical method limits the accuracy of the iterative methods to:

εa = δ / |f′(a)|

2 We refer to Litt F. X., Analyse numérique, première partie, ULG, 1999, for further details.


in which δ represents an upper bound for the rounding error in iteration n:

δ ≥ |δn| = |f̃(xn) − f(xn)|

where f̃(xn) represents the calculated value of the function. Let us now assume that we wish to determine a solution a with a degree of precision ε. We could stop the iterative process on the basis of the error estimation formulae. These formulae, however, require a certain level of information on the derivative f′(x), information that is not easy to obtain. On the other hand, the limit precision εa will not generally be known beforehand.3 Consequently, we run the risk that ε, the accuracy sought, is never reached, as it is better than the limit precision εa (ε < εa); in this case, the iterative process would carry on indefinitely. This leads us to accept the following stop criterion:

|xn − xn−1| < ε   or   |xn+1 − xn| ≥ |xn − xn−1|

This means that the iteration process is stopped when iteration n + 1 produces a variation in value no smaller than that produced by iteration n. The value of ε will be chosen in a way that prevents the iteration from stopping too soon.
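The Picard process with this stop criterion can be sketched as follows; the test equation x = cos x is our own example:

```python
import math

def picard(g, x0, eps=1e-10, max_iter=1000):
    """Fixed-point (Picard) iteration x_{n+1} = g(x_n), stopped when the step
    falls below eps or stops shrinking (the stop criterion of the text)."""
    x_prev, x = x0, g(x0)
    step_prev = abs(x - x_prev)
    for _ in range(max_iter):
        x_next = g(x)
        step = abs(x_next - x)
        if step_prev < eps or step >= step_prev:
            return x_next                # variation no longer decreasing, or small enough
        x_prev, x, step_prev = x, x_next, step
    return x

root = picard(math.cos, 1.0)   # solves x = cos x
```

Here g = cos satisfies |g′(x)| < 1 near the fixed point, so the sufficient condition of Section 8.1.1 holds and the iterates converge.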

8.2 PRINCIPAL METHODS

Defining an iterative method ultimately amounts to defining the function h(x) in the equation x = g(x) ≡ x − h(x)f(x). The choice of this function determines the order of the method.

8.2.1 First-order methods

The simplest choice consists of taking h(x) = m = constant ≠ 0.

8.2.1.1 Chord method

This defines the chord method (Figure A8.1), for which the iteration is xn+1 = xn − m f(xn).

Figure A8.1 Chord method (iterates x0, x1, x2 and the lines y = f(x) and y = x/m)

3 This will in effect require knowledge of f′(a), when a is exactly what is being sought.

Figure A8.2 Classic chord method

The sufficient convergence condition (see Section A8.1.1) for this method is 0 < m f′(x) < 2 in the neighbourhood of the solution. In addition, it can be shown that

lim (n→∞) |en+1| / |en| = |g′(a)| ≠ 0

The chord method is therefore clearly a first-order method (see Section A8.1.2).

8.2.1.2 Classic chord method

It is possible to improve the order of convergence by making m change at each iteration: xn+1 = xn − mn f(xn). The classic chord method (Figure A8.2) takes as the value of mn the inverse of the slope of the straight line defined by the points (xn−1; f(xn−1)) and (xn; f(xn)):

xn+1 = xn − f(xn) · (xn − xn−1) / [f(xn) − f(xn−1)]

This method will converge if f′(a) ≠ 0 and f″(x) is continuous in the neighbourhood of a. In addition, it can be shown that

lim (n→∞) |en+1| / |en|^p = |f″(a) / (2f′(a))|^(1/p) ≠ 0

for p = (1 + √5)/2 = 1.618 . . . > 1, which greatly improves the order of convergence of the method.
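A sketch of the classic chord iteration; the stopping threshold and the test equation are our own choices:

```python
def secant(f, x0, x1, eps=1e-12, max_iter=100):
    """Classic chord method: m_n is the inverse slope of the straight line
    through (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                       # flat chord: cannot divide
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

r = secant(lambda x: x ** 2 - 2.0, 1.0, 2.0)   # root of x^2 = 2
```

Only one new evaluation of f is needed per step, which is the point made below when comparing this method with Newton-Raphson.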

8.2.1.3 Regula falsi method

The regula falsi method (Figure A8.3) takes as the value of mn the inverse of the slope of the straight line defined by the points (xn; f(xn)) and (xn′; f(xn′)), where n′ is the highest index for which f(xn′) · f(xn) < 0:

xn+1 = xn − f(xn) · (xn − xn′) / [f(xn) − f(xn′)]

Figure A8.3 Regula falsi method

This method always converges when f(x) is continuous. On the other hand, its convergence is linear, and therefore less effective than the convergence of the classic chord method.

8.2.2 Newton–Raphson method

If, in the classic chord method, we choose mn so that g′(xn) = 0, that is, 1/mn = f′(xn), we obtain a second-order iteration. The method thus defined,

xn+1 = xn − f(xn) / f′(xn)

is known as the Newton–Raphson method (Figure A8.4). It is clearly a second-order method, as

lim (n→∞) |en+1| / |en|² = (1/2) |f″(a) / f′(a)| ≠ 0

The Newton–Raphson method is therefore rapid insofar as the initial iterated value is not too far from the solution sought, as global convergence is not assured at all. A convergence criterion is therefore given by the following theorem. Assume that f′(x) ≠ 0, that f″(x) does not change its sign within the interval [a; b], and that f(a) · f(b) < 0. If, furthermore,

|f(a) / f′(a)| < b − a   and   |f(b) / f′(b)| < b − a

the Newton–Raphson method will converge for every initial arbitrary point x0 belonging to [a; b].

Figure A8.4 Newton–Raphson method
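The Newton–Raphson iteration can be sketched in a few lines; the example function and starting point are ours:

```python
def newton_raphson(f, df, x0, eps=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n), second order near a."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < eps:
            break
    return x

r = newton_raphson(lambda x: x ** 3 - x - 2.0,
                   lambda x: 3 * x ** 2 - 1.0,
                   1.5)   # root of x^3 - x - 2 = 0
```

Note that each step costs one evaluation of f and one of f′, the fact behind the θ comparison discussed next.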


The classic chord method, unlike the Newton–Raphson method, requires two initial approximations, but it involves only one new function evaluation at each subsequent stage. The choice between the classic chord method and the Newton–Raphson method will therefore depend on the effort of calculation required to evaluate f′(x). Let us assume that the effort of calculation required to evaluate f′(x) is θ times the effort of calculation for f(x). Given what has been said above, we can establish that the effort of calculation will be the same for the two methods if:

1 + θ = log 2 / log p,   in which p = (1 + √5)/2

is the order of convergence of the classic chord method. In consequence:

• If θ > (log 2 / log p) − 1 ≈ 0.44, the classic chord method will be used.
• If θ ≤ (log 2 / log p) − 1 ≈ 0.44, the Newton–Raphson method will be used.

8.2.3 Bisection method

The bisection method is a linear convergence method and is therefore slow. Its use is, however, justified by the fact that it converges globally, unlike the usual methods (especially the Newton–Raphson and classic chord methods). This method will therefore be used to bring the initial iterated value of the Newton–Raphson or classic chord method to a point sufficiently close to the solution to ensure that those methods converge. Let us assume therefore that f(x) is continuous in the interval [a0; b0] and such that4 f(a0) · f(b0) < 0. The principle of the method consists of constructing a converging sequence of bracketed intervals, [a1; b1] ⊃ [a2; b2] ⊃ [a3; b3] ⊃ . . . , all of which contain a solution of the equation f(x) = 0. If it is assumed that5 f(a0) < 0 and f(b0) > 0, the intervals Ik = [ak; bk] are constructed by recurrence on the basis of Ik−1:

[ak; bk] = [mk; bk−1] if f(mk) < 0;   [ak−1; mk] if f(mk) > 0

Here, mk = (ak−1 + bk−1)/2. One is thus assured that f(ak) < 0 and f(bk) > 0, which guarantees convergence.
The bisection method is not a Picard iteration, but its order of convergence can be determined, as lim (n→∞) |en+1| / |en| = 1/2. The bisection method is therefore a first-order method.
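A sketch of the bisection recurrence, following the sign convention f(a0) < 0 < f(b0) of the text (the flip for the opposite case corresponds to footnote 5); the sample equation is ours:

```python
def bisection(f, a, b, eps=1e-12):
    """Bisection: halve [a, b] keeping f(a) < 0 < f(b), as in the recurrence
    [a_k; b_k] = [m_k; b_{k-1}] if f(m_k) < 0, else [a_{k-1}; m_k]."""
    if f(a) > 0:                       # enforce the sign convention
        f = lambda x, g=f: -g(x)      # solve -f(x) = 0 instead (same root)
    while b - a > eps:
        m = 0.5 * (a + b)
        if f(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

r = bisection(lambda x: x ** 2 - 2.0, 0.0, 2.0)
```

Each step halves the bracket, which is exactly the linear (first-order) convergence with constant 1/2 stated above.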

8.3 NONLINEAR EQUATION SYSTEMS

We now have a system of n nonlinear equations in n unknowns:

fi(x1, x2, . . . , xn) = 0    i = 1, 2, . . . , n

or, in vectorial notation, f(x) = 0. The solution of the system is an n-dimensional vector a.

4 This implies that f(x) has a root within this interval.
5 This is not restrictive in any way, as it corresponds to solving f(x) = 0 or −f(x) = 0, x ∈ [a0; b0], depending on the case.


8.3.1 General theory of n-dimensional iteration

The general theory of n-dimensional iteration is similar to the one-dimensional theory. The above equation can thus be expressed in the form x = g(x) ≡ x − A(x)f(x), where A is a square matrix of order n. Picard's iteration is always defined as

xk+1 = g(xk)    k = 0, 1, 2, . . .

and the convergence theorem for Picard's iteration remains valid in n dimensions. In addition, if the Jacobian matrix J(x), defined by

[J(x)]ij = ∂gj(x) / ∂xi

is such that, for every x ∈ I, ||J(x)|| ≤ m for a compatible norm, with m < 1, then Lipschitz's condition is satisfied. The order of convergence is defined by

lim (k→∞) ||ek+1|| / ||ek||^p = C

where C is the asymptotic error constant.

8.3.2 Principal methods

If one chooses a constant matrix A as the value of A(x), the iterative process is the generalisation in n dimensions of the chord method. If the inverse of the Jacobian matrix of f is chosen as the value of A(x), we obtain the generalisation in n dimensions of the Newton–Raphson method. Another approach to solving the equation f(x) = 0 involves using the ith equation to determine the ith component. Therefore, for i = 1, 2, . . . , n, the following equations are solved in succession with respect to xi:

fi(x1^(k+1), . . . , xi−1^(k+1), xi, xi+1^(k), . . . , xn^(k)) = 0

This is known as the nonlinear Gauss–Seidel method.
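A sketch of the nonlinear Gauss–Seidel scheme for a small system, using a scalar chord (secant) iteration as the inner one-dimensional solver; the example system and all names are ours:

```python
import math

def solve_scalar(phi, t0, eps=1e-12, iters=60):
    """Scalar secant solver for each one-dimensional sub-equation phi(t) = 0."""
    u, v = t0, t0 + 0.1
    fu, fv = phi(u), phi(v)
    for _ in range(iters):
        if fv == fu or abs(v - u) < eps:
            break
        u, fu, v, fv = v, fv, v - fv * (v - u) / (fv - fu), 0.0
        fv = phi(v)
    return v

def gauss_seidel_nl(fs, x0, sweeps=50):
    """Nonlinear Gauss-Seidel: in sweep k, the i-th equation is solved for x_i
    with components 1..i-1 already updated and i+1..n still at sweep k."""
    x = list(x0)
    for _ in range(sweeps):
        for i, f in enumerate(fs):
            def phi(t, i=i, f=f):
                trial = list(x)
                trial[i] = t
                return f(trial)
            x[i] = solve_scalar(phi, x[i])
    return x

# example system (ours): x = 0.5*cos(y), y = 0.5*sin(x)
fs = [lambda p: p[0] - 0.5 * math.cos(p[1]),
      lambda p: p[1] - 0.5 * math.sin(p[0])]
sol = gauss_seidel_nl(fs, [0.0, 0.0])
```

The sweeps converge here because the underlying fixed-point mapping is a contraction; as in one dimension, convergence of the scheme is not guaranteed in general.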

Bibliography

CHAPTER 1

The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices for the Management and Supervision of Operational Risk, Basle, February 2003.
The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle Capital Accord, Basle, January 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle Capital Accord: An Explanatory Note, Basle, January 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Vue d'ensemble du Nouvel accord de Bâle sur les fonds propres, Basle, January 2001.
Cruz M. G., Modelling, Measuring and Hedging Operational Risk, John Wiley & Sons, Ltd, 2002.
Hoffman D. G., Managing Operational Risk: 20 Firm-Wide Best Practice Strategies, John Wiley & Sons, Inc, 2002.
Jorion P., Financial Risk Manager Handbook (Second Edition), John Wiley & Sons, Inc, 2003.
Marshall C., Measuring and Managing Operational Risks in Financial Institutions, John Wiley & Sons, Inc, 2001.

CHAPTER 2

The Bank for International Settlements, BIS Quarterly Review, Collateral in Wholesale Financial Markets, Basle, September 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Internal Audit in Banks and the Supervisor's Relationship with Auditors, Basle, August 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices for Managing Liquidity in Banking Organisations, Basle, February 2000.
The Bank for International Settlements, Committee on the Global Financial System, Collateral in Wholesale Financial Markets: Recent Trends, Risk Management and Market Dynamics, Basle, March 2001.
Moody's, Moody's Analytical Framework for Operational Risk Management of Banks, Moody's, January 2003.

CHAPTER 3

Bachelier L., Théorie de la spéculation, Gauthier-Villars, 1900.
Bechu T. and Bertrand E., L'Analyse Technique, Economica, 1998.
Binmore K., Jeux et théorie des jeux, De Boeck & Larcier, 1999.
Brealey R. A. and Myers S. C., Principles of Corporate Finance, McGraw-Hill, 1991.
Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997.
Chen N. F., Roll R., and Ross S. A., Economic forces and the stock market, Journal of Business, No. 59, 1986, pp. 383–403.
Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.


Devolder P., Finance stochastique, Éditions ULB, 1993.
Dhrymes P. J., Friend I., and Gultekin N. B., A critical re-examination of the empirical evidence on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46.
Eeckhoudt L. and Gollier C., Risk, Harvester Wheatsheaf, 1995.
Elton E. and Gruber M., Modern Portfolio Theory and Investment Analysis, John Wiley & Sons, Inc, 1991.
Elton E., Gruber M., and Padberg M., Optimal portfolios from single ranking devices, Journal of Portfolio Management, Vol. 4, No. 3, 1978, pp. 15–19.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection, Journal of Finance, Vol. XI, No. 5, 1976, pp. 1341–57.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection: tracing out the efficient frontier, Journal of Finance, Vol. XIII, No. 1, 1978, pp. 296–302.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection with upper bounds, Operations Research, 1978.
Fama E. and MacBeth J., Risk, return and equilibrium: empirical tests, Journal of Political Economy, Vol. 71, No. 1, 1974, pp. 607–36.
Fama E. F., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105.
Fama E. F., Efficient capital markets: a review of theory and empirical work, Journal of Finance, Vol. 25, 1970.
Fama E. F., Random walks in stock market prices, Financial Analysts Journal, 1965.
Gillet P., L'efficience des marchés financiers, Economica, 1999.
Gordon M. and Shapiro E., Capital equipment analysis: the required rate of profit, Management Science, Vol. 3, October 1956.
Grinold C. and Kahn N., Active Portfolio Management, McGraw-Hill, 1998.
Lintner J., The valuation of risky assets and the selection of risky investments, Review of Economics and Statistics, Vol. 47, 1965, pp. 13–37.
Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Blackwell Publishers, 1987.
Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77–91.
Mehta M. L., Random Matrices, Academic Press, 1996.
Miller M. H. and Modigliani F., Dividend policy, growth and the valuation of shares, Journal of Business, 1961.
Morrison D., Multivariate Statistical Methods, McGraw-Hill, 1976.
Roger P., L'évaluation des Actifs Financiers, De Boeck, 1996.
Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976, pp. 343–62.
Samuelson P., Mathematics of Speculative Price, SIAM Review, Vol. 15, No. 1, 1973.
Saporta G., Probabilités, Analyse des Données et Statistique, Technip, 1990.
Sharpe W., A simplified model for portfolio analysis, Management Science, Vol. 9, No. 1, 1963, pp. 277–93.
Sharpe W., Capital asset prices, Journal of Finance, Vol. 19, 1964, pp. 425–42.
Von Neumann J. and Morgenstern O., Theory of Games and Economic Behaviour, Princeton University Press, 1947.

CHAPTER 4

Bierwag G., Kaufmann G., and Toevs A. (Eds), Innovations in Bond Portfolio Management: Duration Analysis and Immunisation, JAI Press, 1983.
Bisière C., La Structure par Terme des Taux d'intérêt, Presses Universitaires de France, 1997.
Brennan M. and Schwartz E., A continuous time approach to the pricing of bonds, Journal of Banking and Finance, Vol. 3, No. 2, 1979, pp. 133–55.
Colmant B., Delfosse V., and Esch L., Obligations, les notions financières essentielles, Larcier, 2002.
Cox J., Ingersoll J., and Ross S., A theory of the term structure of interest rates, Econometrica, Vol. 53, No. 2, 1985, pp. 385–406.
Fabozzi F. J., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000.


Heath D., Jarrow R., and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New Methodology, Cornell University, 1987.
Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: discrete time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40.
Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of Finance, Vol. 41, No. 5, 1986, pp. 1011–29.
Macaulay F., Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields and Stock Prices in the United States since 1856, New York, National Bureau of Economic Research, 1938, pp. 44–53.
Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4, No. 1, 1973, pp. 141–83.
Ramaswamy K. and Sundaresan M., The valuation of floating-rate instruments: theory and evidence, Journal of Financial Economics, Vol. 17, No. 2, 1986, pp. 251–72.
Richard S., An arbitrage model of the term structure of interest rates, Journal of Financial Economics, Vol. 6, No. 1, 1978, pp. 33–57.
Schaefer S. and Schwartz E., A two-factor model of the term structure: an approximate analytical solution, Journal of Financial and Quantitative Analysis, Vol. 19, No. 4, 1984, pp. 413–24.
Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics, Vol. 5, No. 2, 1977, pp. 177–88.

CHAPTER 5

Black F. and Scholes M., The pricing of options and corporate liabilities, Journal of Political Economy, Vol. 81, 1973, pp. 637–59.
Colmant B. and Kleynen G., Gestion du risque de taux d'intérêt et instruments financiers dérivés, Kluwer, 1995.
Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
Courtadon G., The pricing of options on default-free bonds, Journal of Financial and Quantitative Analysis, Vol. 17, 1982, pp. 75–100.
Cox J., Ross S., and Rubinstein M., Option pricing: a simplified approach, Journal of Financial Economics, No. 7, 1979, pp. 229–63.
Devolder P., Finance stochastique, Éditions ULB, 1993.
Garman M. and Kohlhagen S., Foreign currency option values, Journal of International Money and Finance, No. 2, 1983, pp. 231–7.
Hicks A., Foreign Exchange Options, Woodhead, 1993.
Hull J. C., Options, Futures and Other Derivatives, Prentice Hall, 1997.
Krasnov M., Kisselev A., Makarenko G., and Chikin E., Mathématiques supérieures pour ingénieurs et polytechniciens, De Boeck, 1993.
Reilly F. K. and Brown K. C., Investment Analysis and Portfolio Management, South-Western, 2000.
Rubinstein M., Options for the undecided, in From Black–Scholes to Black Holes, Risk Magazine, 1992.
Sokolnikoff I. S. and Redheffer R. M., Mathematics of Physics and Modern Engineering, McGraw-Hill, 1966.

CHAPTER 6

Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80.
Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105.
Johnson N. L. and Kotz S., Continuous Univariate Distributions, John Wiley & Sons, Inc, 1970.
Jorion P., Value at Risk, McGraw-Hill, 2001.
Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976.

CHAPTER 7

Abramowitz M. and Stegun I. A., Handbook of Mathematical Functions, Dover, 1972.
Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1995.


Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA, undated.
Chase Manhattan Bank NA, Value at Risk, Chase Manhattan Bank NA, 1996.
Danielsson J. and De Vries C., Beyond the Sample: Extreme Quantile and Probability Estimation, Mimeo, Iceland University and Tinbergen Institute Rotterdam, 1997.
Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Journal of Empirical Finance, No. 4, 1997, pp. 241–57.
Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group Discussion Paper 273, London School of Economics, 1997.
Embrechts P., Klüppelberg C., and Mikosch T., Modelling Extremal Events for Insurance and Finance, Springer Verlag, 1999.
Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5.
Gilchrist W. G., Statistical Modelling with Quantile Functions, Chapman & Hall/CRC, 2000.
Gnedenko B. V., On the limit distribution of the maximum term in a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.
Gourieroux C., Modèles ARCH et applications financières, Economica, 1992.
Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958.
Hendricks D., Evaluation of Value at Risk Models using Historical Data, FRBNY Policy Review, 1996, pp. 39–69.
Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of Statistics, Vol. 46, 1975, pp. 1163–73.
Hill I. D., Hill R., and Holder R. L., Fitting Johnson curves by moments (Algorithm AS 99), Applied Statistics, Vol. 25, No. 2, 1976, pp. 180–9.
Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58.
Johnson N. L., Systems of frequency curves generated by methods of translation, Biometrika, Vol. 36, 1949, pp. 149–76.
Longin F. M., From value at risk to stress testing: the extreme value approach, Journal of Banking and Finance, No. 24, 2000, pp. 1097–130.
Longin F. M., Extreme Value Theory: Introduction and First Applications in Finance, Journal de la Société Statistique de Paris, Vol. 136, 1995, pp. 77–97.
Longin F. M., The asymptotic distribution of extreme stock market returns, Journal of Business, No. 69, 1996, pp. 383–408.
McNeil A. J., Estimating the Tails of Loss Severity Distributions using Extreme Value Theory, Mimeo, ETH Zentrum Zurich, 1996.
McNeil A. J., Extreme value theory for risk managers, in Internal Modelling and CAD II, Risk Publications, 1999, pp. 93–113.
Mina J. and Yi Xiao J., Return to RiskMetrics: The Evolution of a Standard, RiskMetrics, 2001.
Morgan J. P., RiskMetrics: Technical Document, 4th Ed., Morgan Guaranty Trust Company, 1996.
Pickands J., Statistical inference using extreme order statistics, Annals of Statistics, Vol. 45, 1975, pp. 119–31.
Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhäuser Verlag, 2001.
Rouvinez C., Going Greek with VAR, Risk Magazine, February 1997, pp. 57–65.
Schaller P., On Cash Flow Mapping in VAR Estimation, Creditanstalt-Bankverein, CA RISC199602237, 1996.
Stambaugh V., Value at Risk, unpublished, 1996.
Vose D., Quantitative Risk Analysis, John Wiley & Sons, Ltd, 1996.

CHAPTER 9

Lopez T., Délimiter le risque de portefeuille, Banque Magazine, No. 605, July–August 1999, pp. 44–6.

CHAPTER 10
Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997.
Burden R. L. and Faires J. D., Numerical Analysis, Prindle, Weber & Schmidt, 1981.

Bibliography

387

Esch L., Kieffer R., and Lopez T., Value at Risk – Vers un risk management moderne, De Boeck, 1997.
Litt F. X., Analyse numérique, première partie, ULG, 1999.
Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell, 1987.
Markowitz H., Portfolio Selection: Efficient Diversification of Investments, Blackwell Publishers, 1991.
Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77–91.
Nougier J-P., Méthodes de calcul numérique, Masson, 1993.
Vauthey P., Une approche empirique de l'optimisation de portefeuille, Eds. Universitaires Fribourg Suisse, 1990.

CHAPTER 11
Chen N. F., Roll R., and Ross S. A., Economic forces and the stock market, Journal of Business, No. 59, 1986, pp. 383–403.
Dhrymes P. J., Friend I., and Gultekin N. B., A critical re-examination of the empirical evidence on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46.
Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, Vol. 13, 1976, pp. 341–60.

CHAPTER 12
Ausubel L., The failure of competition in the credit card market, American Economic Review, Vol. 81, 1991, pp. 50–81.
Cooley W. W. and Lohnes P. R., Multivariate Data Analysis, John Wiley & Sons, Inc, 1971.
Damel P., La modélisation des contrats bancaires à taux révisable: une approche utilisant les corrélations canoniques, Banque et Marchés, mars–avril 1999.
Damel P., L'apport de replicating portfolio ou portefeuille répliqué en ALM: méthode contrat par contrat ou par la valeur optimale, Banque et Marchés, mars–avril 2001.
Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: a new methodology for contingent claims valuation, Econometrica, Vol. 60, 1992, pp. 77–105.
Hotelling H., Relations between two sets of variates, Biometrika, Vol. 28, 1936, pp. 321–77.
Hull J. and White A., Pricing interest rate derivative securities, Review of Financial Studies, Vol. 3, No. 4, 1990, pp. 573–92.
Hutchinson D. and Pennacchi G., Measuring rents and interest rate risk in imperfect financial markets: the case of retail bank deposits, Journal of Financial and Quantitative Analysis, Vol. 31, 1996, pp. 399–417.
Mardia K. V., Kent J. T., and Bibby J. M., Multivariate Analysis, Academic Press, 1979.
Sanyal A., A Continuous Time Monte Carlo Implementation of the Hull and White One Factor Model and the Pricing of Core Deposit, unpublished manuscript, December 1997.
Selvaggio R., Using the OAS methodology to value and hedge commercial bank retail demand deposit premiums, in The Handbook of Asset/Liability Management, edited by F. J. Fabozzi and A. Konishi, McGraw-Hill, 1996.
Smithson C., A Lego approach to financial engineering, in The Handbook of Currency and Interest Rate Risk Management, edited by R. Schwartz and C. W. Smith Jr., New York Institute of Finance, 1990.
Tatsuoka M. M., Multivariate Analysis, John Wiley & Sons, Ltd, 1971.
Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.

APPENDIX 1
Bair J., Mathématiques générales, De Boeck, 1990.
Esch L., Mathématique pour économistes et gestionnaires (2nd Edition), De Boeck, 1999.
Guerrien B., Algèbre linéaire pour économistes, Economica, 1982.
Ortega J. M., Matrix Theory, Plenum, 1987.
Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.

APPENDIX 2
Baxter M. and Rennie A., Financial Calculus, Cambridge University Press, 1996.

Feller W., An Introduction to Probability Theory and its Applications (2 volumes), John Wiley & Sons, Inc, 1968.
Grimmett G. and Stirzaker D., Probability and Random Processes, Oxford University Press, 1992.
Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.
Loève M., Probability Theory (2 volumes), Springer-Verlag, 1977.
Roger P., Les outils de la modélisation financière, Presses Universitaires de France, 1991.
Ross S. M., Initiation aux probabilités, Presses Polytechniques et Universitaires Romandes, 1994.

APPENDIX 3
Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988.
Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux, 1975.
Johnston J., Econometric Methods, McGraw-Hill, 1972.
Justens D., Statistique pour décideurs, De Boeck, 1988.
Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.

APPENDIX 4
Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1928, pp. 180–90.
Gnedenko B. V., On the limit distribution of the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.
Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58.

APPENDIX 5
Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980.
Saporta G., Probabilités, Analyse des Données et Statistique, Technip, 1990.

APPENDIX 6
Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980.
Saporta G., Probabilités, Analyse des Données et Statistique, Technip, 1990.

APPENDIX 7
Droesbeke J. J., Fichet B., and Tassi P., Modélisation ARCH, théorie statistique et applications dans le domaine de la finance, Éditions ULB, 1994.
Engle R. F., Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, No. 50, 1982, pp. 987–1007.
Gourieroux C., Modèles ARCH et applications financières, Economica, 1992.
Nelson D. B., Conditional heteroscedasticity in asset returns: a new approach, Econometrica, No. 59, 1991, pp. 347–70.

APPENDIX 8
Burden R. L. and Faires J. D., Numerical Analysis, Prindle, Weber & Schmidt, 1981.
Litt F. X., Analyse numérique, première partie, ULG, 1999.
Nougier J-P., Méthodes de calcul numérique, Masson, 1993.

INTERNET SITES
http://www.aptltd.com
http://www.bis.org/index.htm
http://www.cga-canada.org/fr/magazine/nov-dec02/Cyberguide_f.htm
http://www.fasb.org
http://www.iasc.org.uk/cmt/0001.asp
http://www.ifac.org
http://www.prim.lu

Index

absolute global risk 285 absolute risk aversion coefficient 88 accounting standards 9–10 accrued interest 118–19 actuarial output rate on issue 116–17 actuarial return rate at given moment 117 adjustment tests 361 Aitken extrapolation 376 Akaike's information criterion (AIC) 319 allocation independent allocation 288 joint allocation 289 of performance level 289–90 of systematic risk 288–9 American option 149 American put 158–9 arbitrage 31 arbitrage models 138–9 with state variable 139–42 arbitrage pricing theory (APT) 97–8, 99 absolute global risk 285 analysis of style 291–2 beta 290, 291 factor-sensitivity profile 285 model 256, 285–94 relative global risk/tracking error 285–7 ARCH 320 ARCH-GARCH models 373 arithmetical mean 36–7 ARMA models 318–20 asset allocation 104, 274 asset liability management replicating portfolios 311–21 repricing schedules 301–11 simulations 300–1 structural risk analysis in 295–9 VaR in 301 autocorrelation test 46

autoregressive integrated moving average 320 autoregressive moving average (ARMA) 318 average deviation 41 bank offered rate (BOR) 305 basis point 127 Basle Committee for Banking Controls 4 Basle Committee on Banking Supervision 3–9 Basle II 5–9 Bayesian information criterion (BIC) 319 bear money spread 177 benchmark abacus 287–8 Bernouilli scheme 350 Best Linear Unbiased Estimators (BLUE) 363 beta APT 290, 291 portfolio 92 bijection 335 binomial distribution 350–1 binomial formula (Newton's) 111, 351 binomial law of probability 165 binomial trees 110, 174 binomial trellis for underlying equity 162 bisection method 380 Black and Scholes model 33, 155, 174, 226, 228, 239 for call option 169 dividends and 173 for options on equities 168–73 sensitivity parameters 172–3 BLUE (Best Linear Unbiased Estimators) 363 bond portfolio management strategies 135–8 active strategy 137–8 duration and convexity of portfolio 135–6 immunizing a portfolio 136–7 passive strategy: immunisation 135–7 bonds average instant return on 140

bonds (continued) definition 115–16 financial risk and 120–9 price 115 price approximation 126 return on 116–19 sources of risk 119–21 valuing 119 bootstrap method 233 Brennan and Schwartz model 139 building approach 316 bull money spread 177 business continuity plan (BCP) 14 insurance and 15–16 operational risk and 16 origin, definition and objective 14 butterfly money spread 177 calendar spread 177 call-associated bonds 120 call option 149, 151, 152 intrinsic value 153 premium breakdown 154 call–put parity relation 166 for European options 157–8 canonical analysis 369 canonical correlation analysis 307–9, 369–70 capital asset pricing model (CAPM or MEDAF) 93–8 equation 95–7, 100, 107, 181 cash 18 catastrophe scenarios 20, 32, 184, 227 Cauchy's law 367 central limit theorem (CLT) 41, 183, 223, 348–9 Charisma 224 Chase Manhattan 224, 228 Choleski decomposition method 239 Choleski factorisation 220, 222, 336–7 chooser option 176 chord method 377–8 classic chord method 378 clean price 118 collateral management 18–19 compliance 24 compliance tests 361 compound Poisson process 355 conditional normality 203 confidence coefficient 360 confidence interval 360–1 continuous models 30, 108–9, 111–13, 131–2, 134 continuous random variables 341–2 contract-by-contract 314–16

convergence 375–6 convertible bonds 116 convexity 33, 149, 181 of a bond 127–9 corner portfolio 64 correlation 41–2, 346–7 counterparty 23 coupon (nominal) rate 116 coupons 115 covariance 41–2, 346–7 cover law of probability 164 Cox, Ingersoll and Ross model 139, 145–7, 174 Cox, Ross and Rubinstein binomial model 162–8 dividends and 168 one period 163–4 T periods 165–6 two periods 164–5 credit risk 12, 259 critical line algorithm 68–9 debentures 18 decision channels 104, 105 default risk 120 deficit constraint 90 degenerate random variable 341 delta 156, 181, 183 delta hedging 157, 172 derivatives 325–7 calculations 325–6 definition 325 extrema 326–7 geometric interpretations 325 determinist models 108–9 generalisation 109 stochastic model and 134–5 deterministic structure of interest rates 129–35 development models 30 diagonal model 70 direct costs 26 dirty price 118 discrete models 30, 108, 109–11, 130, 132–4 discrete random variables 340–1 dispersion index 26 distortion models 138 dividend discount model 104, 107–8 duration 33, 122–7, 149 and characteristics of a bond 124 definition 121 extension of concept of 148 interpretations 121–3 of equity funds 299 of specific bonds 123–4

dynamic interest-rate structure 132–4 dynamic models 30 dynamic spread 303–4 efficiency, concept of 45 efficient frontier 27, 54, 59, 60 for model with risk-free security 78–9 for reformulated problem 62 for restricted Markowitz model 68 for Sharpe's simple index model 73 unrestricted and restricted 68 efficient portfolio 53, 54 EGARCH models 320, 373 elasticity, concept of 123 Elton, Gruber and Padberg method 79–85, 265, 269–74 adapting to VaR 270–1 cf VaR 271–4 maximising risk premium 269–70 equities definition 35 market efficiency 44–8 market return 39–40 portfolio risk 42–3 return on 35–8 return on a portfolio 38–9 security risk within a portfolio 43–4 equity capital adequacy ratio 4 equity dynamic models 108–13 equity portfolio diversification 51–93 model with risk-free security 75–9 portfolio size and 55–6 principles 51–5 equity portfolio management strategies 103–8 equity portfolio theory 183 equity valuation models 48–51 equivalence, principle of 117 ergodic estimator 40, 42 estimated variance–covariance matrix method (VC) 201, 202–16, 275, 276, 278 breakdown of financial assets 203–5 calculating VaR 209–16 hypotheses and limitations 235–7 installation and use 239–41 mapping cashflows with standard maturity dates 205–9 valuation models 237–9 estimator for mean of the population 360 European call 158–9 European option 149 event-based risks 32, 184 ex ante rate 117 ex ante tracking error 285, 287 ex post return rate 121 exchange options 174–5 exchange positions 204

exchange risk 12 exercise price of option 149 expected return 40 expected return risk 41, 43 expected value 26 exponential smoothing 318 extrema 326–7, 329–31 extreme value theory 230–4, 365–7 asymptotic results 365–7 attraction domains 366–7 calculation of VaR 233–4 exact result 365 extreme value theorem 230–1 generalisation 367 parameter estimation by regression 231–2 parameter estimation using the semi-parametric method 233, 234 factor-8 mimicking portfolio 290 factor-mimicking portfolios 290 factorial analysis 98 fair value 10 fat tail distribution 231 festoon effect 118, 119 final prediction error (FPE) 319 Financial Accounting Standards Board (FASB) 9 financial asset evaluation line 107 first derivative 325 Fisher's skewness coefficient 345–6 fixed-income securities 204 fixed-rate bonds 115 fixed rates 301 floating-rate contracts 301 floating-rate integration method 311 FRAs 276 Fréchet's law 366, 367 frequency 253 fundamental analysis 45 gamma 156, 173, 181, 183 gap 296–7, 298 GARCH models 203, 320 Garman–Kohlhagen formula 175 Gauss-Seidel method, nonlinear 381 generalised error distribution 353 generalised Pareto distribution 231 geometric Brownian motion 112, 174, 218, 237, 356 geometric mean 36 geometric series 123, 210, 328–9 global portfolio optimisation via VaR 274–83 generalisation of asset model 275–7 construction of optimal global portfolio 277–8 method 278–83

good practices 6 Gordon–Shapiro formula 48–50, 107, 149 government bonds 18 Greeks 155–7, 172, 181 gross performance level and risk withdrawal 290–1 Gumbel's law 366, 367

Heath, Jarrow and Morton model 138, 302 hedging formula 172 Hessian matrix 330 high leverage effect 257 historical simulation 201, 224–34, 265 basic methodology 224–30 calculations 239 data 238–9 extreme value theory 230–4 hypotheses and limitations 235–7 installation and use 239–41 isolated asset case 224–5 portfolio case 225–6 risk factor case 224 synthesis 226–30 valuation models 237–8 historical volatility 155 histories 199 Ho and Lee model 138 homogeneity tests 361 Hull and White model 302, 303 hypothesis test 361–2

IAS standards 10 IASB (International Accounting Standards Board) 9 IFAC (International Federation of Accountants) 9 immunisation of bonds 124–5 implied volatility 155 in the money 153, 154 independence tests 361 independent allocation 288 independent random variables 342–3 index funds 103 indifference curves 89 indifference, relation of 86 indirect costs 26 inequalities on calls and puts 159–60 inferential statistics 359–62 estimation 360–1 sampling 359–60 sampling distribution 359–60 instant term interest rate 131 integrated risk management 22, 24–5 interest rate curves 129 models for bonds 149 static structure of 130–2 internal audit vs. risk management 22–3 internal notation (IN) 4 intrinsic value of option 153 Itô formula (Itô lemma) 140, 169, 357 Itô process 112, 356

Jensen index 102–3 Johnson distributions 215 joint allocation 289 joint distribution function 342

kappa see vega
kurtosis coefficient 182, 189, 345–6

Lagrangian function 56, 57, 61, 63, 267, 331 for risk-free security model 76 for Sharpe's simple index model 71 Lagrangian multipliers 57, 331 law of large numbers 223, 224, 344 law of probability 339 least square method 363 legal risk 11, 21, 23–4 Lego approach 316 leptokurtic distribution 41, 182, 183, 189, 218, 345 linear equation system 335–6 linear model 32, 33, 184 linearity condition 202, 203 Lipschitz's condition 375–6 liquidity bed 316 liquidity crisis 17 liquidity preference 316 liquidity risk 12, 16, 18, 296–7 logarithmic return 37 logistic regression 309–10, 371 log-normal distribution 349–50 log-normal law with parameter 349 long (short) straddle 176 loss distribution approach 13 lottery bonds 116 MacLaurin development 275, 276 mapping cashflows 205–9 according to RiskMetrics™ 206–7 alternative 207–8 elementary 205–6 marginal utility 87 market efficiency 44–8 market model 91–3 market price of the risk 141 market risk 12 market straight line 94

market timing 104–7 Markowitz's portfolio theory 30, 41, 43, 56–69, 93, 94, 182 first formulation 56–60 reformulating the problem 60–9 mathematic valuation models 199 matrix algebra 239 calculus 332–7 diagonal 333 n-order 332 operations 333–4 symmetrical 332–3, 334–5 maturity price of bond 115 maximum outflow 17–18 mean 343–4 mean variance 27, 265 for equities 149 measurement theory 344 media risk 12 Merton model 139, 141–2 minimum equity capital requirements 4 modern portfolio theory (MPT) 265 modified duration 121 money spread 177 monoperiodic models 30 Monte Carlo simulation 201, 216–23, 265, 303 calculations 239 data 238–9 estimation method 218–23 hypotheses and limitations 235–7 installation and use 239–41 probability theory and 216–18 synthesis 221–3 valuation models 237–8 multi-index models 221, 266 multi-normal distribution 349 multivariate random variables 342–3 mutual support 147–9 Nelson and Schaefer model 139 net present value (NPV) 298–9, 302–3 neutral risk 164, 174 New Agreement 4, 5 Newton–Raphson nonlinear iterative method 309, 379–80, 381 Newton's binomial formula 111, 351 nominal rate of a bond 115, 116 nominal value of a bond 115 non-correlation 347 nonlinear equation systems 380–1 first-order methods 377–9 iterative methods 375–7 n-dimensional iteration 381 principal methods 381

solving 375–81 nonlinear Gauss-Seidel method 381 nonlinear models independent of time 33 nonlinear regression 234 non-quantifiable risks 12–13 normal distribution 41, 183, 188–90, 237, 254, 347–8 normal law 188 normal probability law 183 normality 202, 203, 252–4 observed distribution 254 operational risk 12–14 business continuity plan (BCP) and 16 definition 6 management 12–13 philosophy of 5–9 triptych 14 options complex 175–7 definition 149 on bonds 174 sensitivity parameters 155–7 simple 175 strategies on 175–7 uses 150–2 order of convergence 376 Ornstein–Uhlenbeck process 142–5, 356 OTC derivatives market 18 out of the money 153, 154 outliers 241 Pareto distribution 189, 367 Parzen CAT 319 partial derivatives 329–31 payment and settlement systems 18 Pearson distribution system 183 perfect market 31, 44 performance evaluation 99–108 perpetual bond 123–4 Picard's iteration 268, 271, 274, 280, 375, 376, 381 pip 247 pockets of inefficiency 47 Poisson distribution 350 Poisson process 354–5 Poisson's law 351 portfolio beta 92 portfolio risk management investment strategy 258 method 257–64 risk framework 258–64 power of the test 362 precautionary surveillance 3, 4–5 preference, relation of 86

premium 149 price at issue 115 price-earning ratio 50–1 price of a bond 127 price variation risk 12 probability theory 216–18 process risk 24 product risk 23 pseudo-random numbers 217 put option 149, 152 quadratic form 334–7 qualitative approach 13 quantifiable risks 12, 13 quantile 188, 339–40 quantitative approach 13 Ramaswamy and Sundaresan model 139 random aspect of financial assets 30 random numbers 217 random variables 339–47 random walk 45, 111, 203, 355 statistical tests for 46 range forwards 177 rate fluctuation risk 120 rate mismatches 297–8 rate risk 12, 303–11 redemption price of bond 115 regression line 363 regressions 318, 362–4 multiple 363–4 nonlinear 364 simple 362–3 regula falsi method 378–9 relative fund risk 287–8 relative global risk 285–7 relative risks 43 replicating portfolios 302, 303, 311–21 with optimal value method 316–21 repos market 18 repricing schedules 301–11 residual risk 285 restricted Markowitz model 63–5 rho 157, 173, 183 Richard model 139 risk, attitude towards 87–9 risk aversion 87, 88 risk factors 31, 184 risk-free security 75–9 risk, generalising concept 184 risk indicators 8 risk management cost of 25–6 environment 7

function, purpose of 11 methodology 19–21 vs back office 22 risk mapping 8 risk measurement 8, 41 risk-neutral probability 162, 164 risk neutrality 87 risk of one equity 41 risk of realisation 120 risk of reinvestment 120 risk of reputation 21 risk per share 181–4 risk premium 88 risk return 26–7 risk transfer 14 risk typology 12–19 Risk$™ 224, 228 RiskMetrics™ 202, 203, 206–7, 235, 236, 238, 239–40 scenarios and stress testing 20 Schaefer and Schwartz model 139 Schwarz criterion 319 scope of competence 21 scorecards method 7, 13 security 63–5 security market line 107 self-assessment 7 semi-strong form of efficiency hypothesis 46 semi-parametric method 233 semi-variance 41 sensitivity coefficient 121 separation theorem 94–5, 106 series 328 Sharpe's multi-index model 74–5 Sharpe's simple index method 69–75, 100–1, 132, 191, 213, 265–9 adapting critical line algorithm to VaR 267–8 cf VaR 269 for equities 221 problem of minimisation 266–7 VaR in 266–9 short sale 59 short-term interest rate 130 sign test 46 simulation tests for technical analysis methods 46 simulations 300–1 skewed distribution 182 skewness coefficient 182, 345–6 specific risk 91, 285 speculation bubbles 47 spot 247

spot price 150 spot rate 129, 130 spreads 176–7 square root process 145 St Petersburg paradox 85 standard Brownian motion 33, 355 standard deviation 41, 344–5 standard maturity dates 205–9 standard normal law 348 static models 30 static spread 303–4 stationarity condition 202, 203, 236 stationary point 327, 330 stationary random model 33 stochastic bond dynamic models 138–48 stochastic differential 356–7 stochastic duration 121, 147–8 random evolution of rates 147 stochastic integral 356–7 stochastic models 109–13 stochastic process 33, 353–7 particular 354–6 path of 354 stock exchange indexes 39 stock picking 104, 275 stop criteria 376–7 stop loss 258–9 straddles 175, 176 strangles 175, 176 strategic risk 21 stress testing 20, 21, 223 strike 149 strike price 150 strong form of efficiency hypothesis 46–7 Student distribution 189, 235, 351–2 Student's law 367 Supervisors, role of 8 survival period 17–18 systematic inefficiency 47 systematic risk 44, 91, 285 allocation of 288–9 tail parameter 231 taste for risk 87 Taylor development 33, 125, 214, 216, 275–6 Taylor formula 37, 126, 132, 327–8, 331 technical analysis 45 temporal aspect of financial assets 30 term interest rate 129, 130 theorem of expected utility 86 theoretical reasoning 218 theta 156, 173, 183 three-equity portfolio 54

time value of option 153, 154 total risk 43 tracking errors 103, 285–7 transaction risk 23–4 transition bonds 116 trend extrapolations 318 Treynor index 102 two-equity portfolio 51–4

unbiased estimator 360 underlying equity 149 uniform distribution 352 uniform random variable 217 utility function 85–7 utility of return 85 utility theory 85–90, 183

valuation models 30, 31–3, 160–75, 184 value at risk (VaR) 13, 20–1 based on density function 186 based on distribution function 185 bond portfolio case 250–2 breaking down 193–5 calculating 209–16 calculations 244–52 component 195 components of 195 definition 195–6 estimation 199–200 for a portfolio 190–7 for a portfolio of linear values 211–13 for a portfolio of nonlinear values 214–16 for an isolated asset 185–90 for equities 213–14 heading investment 196–7 incremental 195–7 individual 194 link to Sharpe index 197 marginal 194–5 maximum, for portfolio 263–4 normal distribution 188–90 Treasury portfolio case 244–9 typology 200–2 value of basis point (VBP) 19–20, 21, 127, 245–7, 260–3 variable contracts 301 variable interest rates 300–1 variable rate bonds 115 variance 41, 344–5 variance of expected returns approach 183 variance–covariance matrix 336 Vasicek model 139, 142–4, 174

vega (kappa) 156, 173 volatility of option 154–5

weak form of the efficiency hypothesis 46 Weibull's law 366, 367 Wiener process 355

yield curve 129 yield to maturity (YTM) 250

zero-coupon bond 115, 123, 129 zero-coupon rates, analysis of correlations on 305–7

Index compiled by Annette Musker



295 295 296 297 298 299 300 301 301 301 302 303 311 312 313 314 316

APPENDICES

323

Appendix 1 Mathematical Concepts 1.1 Functions of one variable 1.1.1 Derivatives 1.1.2 Taylor’s formula 1.1.3 Geometric series 1.2 Functions of several variables 1.2.1 Partial derivatives 1.2.2 Taylor’s formula 1.3 Matrix calculus 1.3.1 Deﬁnitions 1.3.2 Quadratic forms

325 325 325 327 328 329 329 331 332 332 334

Appendix 2 Probabilistic Concepts 2.1 Random variables 2.1.1 Random variables and probability law 2.1.2 Typical values of random variables

339 339 339 343

x

Contents

2.2 Theoretical distributions 2.2.1 Normal distribution and associated ones 2.2.2 Other theoretical distributions 2.3 Stochastic processes 2.3.1 General considerations 2.3.2 Particular stochastic processes 2.3.3 Stochastic differential equations

347 347 350 353 353 354 356

Appendix 3 Statistical Concepts 3.1 Inferential statistics 3.1.1 Sampling 3.1.2 Two problems of inferential statistics 3.2 Regressions 3.2.1 Simple regression 3.2.2 Multiple regression 3.2.3 Nonlinear regression

359 359 359 360 362 362 363 364

Appendix 4 Extreme Value Theory 4.1 Exact result 4.2 Asymptotic results 4.2.1 Extreme value theorem 4.2.2 Attraction domains 4.2.3 Generalisation

365 365 365 365 366 367

Appendix 5 Canonical Correlations 5.1 Geometric presentation of the method 5.2 Search for canonical characters

369 369 369

Appendix 6

371

Algebraic Presentation of Logistic Regression

Appendix 7 Time Series Models: ARCH-GARCH and EGARCH 7.1 ARCH-GARCH models 7.2 EGARCH models

373 373 373

Appendix 8 Numerical Methods for Solving Nonlinear Equations 8.1 General principles for iterative methods 8.1.1 Convergence 8.1.2 Order of convergence 8.1.3 Stop criteria 8.2 Principal methods 8.2.1 First order methods 8.2.2 Newton–Raphson method 8.2.3 Bisection method

375 375 375 376 376 377 377 379 380

Contents

8.3 Nonlinear equation systems 8.3.1 General theory of n-dimensional iteration 8.3.2 Principal methods

xi

380 381 381

Bibliography

383

Index

389

Collaborators

Christian Berbé, Civil engineer from the Université libre de Bruxelles and ABAF financial analyst. Previously a director at PricewaterhouseCoopers Consulting in Luxembourg, he is a financial risk management specialist currently working as a wealth manager with Bearbull (Degroof Group).

Pascal Damel, Doctor of management science from the University of Nancy, is a senior lecturer (maître de conférences) in management science at the IUT of Metz and an independent risk management and ALM consultant.

Michel Debay, Civil engineer and physicist of the University of Liège and master of finance and insurance of the High Business School in Liège (HEC), currently heads the Data Warehouse Unit at SA Kredietbank in Luxembourg.

Jean-François Hannosset, Actuary of the Catholic University of Louvain, currently manages the insurance department at Banque Degroof Luxembourg SA and is director of courses at the Luxembourg Institute of Banking Training.

Foreword by Philippe Jorion

Risk management has truly undergone a revolution in the last decade. It was just over 10 years ago, in July 1993, that the Group of 30 (G-30) officially promulgated best practices for the management of derivatives.1 Even though the G-30 issued its report in response to the string of derivatives disasters of the early 1990s, these best practices apply to all financial instruments, not only derivatives. This was the first time the term ‘Value-at-Risk’ (VaR) was publicly and widely mentioned. By now, VaR has become the standard benchmark for measuring financial risk. All major banks dutifully report their VaR in quarterly or annual financial reports.

Modern risk measurement methods are not new, however. They go back to the concept of portfolio risk developed by Harry Markowitz in 1952. Markowitz noted that investors should be interested in total portfolio risk and that ‘diversification is both observed and sensible’. He provided tools for portfolio selection. The new aspect of the VaR revolution is the application of consistent methods to measure market risk across the whole institution or portfolio, across products and business lines. These methods are now being extended to credit risk, operational risk, and to the final frontier of enterprise-wide risk.

Still, risk measurement is too often limited to a passive approach, which is to measure or to control. Modern risk-measurement techniques are much more useful than that: they can be used to manage the portfolio. Consider a portfolio manager with a myriad of securities to select from. The manager should have strong opinions on most securities. Opinions, or expected returns on individual securities, aggregate linearly into the portfolio expected return, so assessing the effect of adding or subtracting securities on the portfolio expected return is intuitive. Risk, however, does not aggregate in a linear fashion: it depends on the number of securities, on individual volatilities and on all correlations.

Risk-measurement methods provide tools such as marginal VaR, component VaR and incremental VaR that help the portfolio manager to decide on the best trade-off between risk and return. Take a situation where a manager considers adding two securities to the portfolio. Both have the same expected return. The first, however, has negative marginal VaR; the second has positive marginal VaR. In other words, the addition of the first security will reduce the portfolio risk; the second will increase the portfolio risk. Clearly, adding the first security is the better choice: it will increase the portfolio expected return and decrease its risk. Without these tools, it is hard to imagine how to manage the portfolio. As an aside, it is often easier to convince top management to invest in risk-measurement systems when it can be demonstrated that they add value through better portfolio management.

Similar choices appear at the level of the entire institution. How does a bank decide on its capital structure, that is, on the amount of equity it should hold to support its activities? Too much equity will reduce its return on equity. Too little equity will increase the likelihood of bankruptcy. The answer lies in risk-measurement methods: the amount of equity should provide an adequate buffer against all enterprise-wide risks at a high confidence level. Once risks are measured, they can be decomposed and weighed against their expected profits. Risks that do not generate high enough payoffs can be sold off or hedged. In the past, such trade-offs were evaluated in an ad hoc fashion.

This book provides tools for going from risk measurement to portfolio or asset management. I applaud the authors for showing how to integrate VaR-based measures in the portfolio optimisation process, in the spirit of Markowitz’s portfolio selection problem. Once risks are measured, they can be managed better.

Philippe Jorion
University of California at Irvine

1 The G-30 is a private, nonprofit association, founded in 1978 and consisting of senior representatives of the private and public sectors and academia. Its main purpose is to affect the policy debate on international economic and financial issues. The G-30 regularly publishes papers. See www.group30.org.
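The marginal-VaR trade-off described in the foreword can be illustrated numerically. The sketch below is our own illustration, not taken from the book, and every figure in it is invented: it computes portfolio volatility as the square root of w′Σw and the marginal risk of each candidate asset, whose sign tells whether a small added position lowers or raises total portfolio risk. Under a normality assumption, marginal VaR is just this marginal volatility scaled by the confidence-level multiplier (e.g. 1.645 at 95 %) and the position value.

```python
from math import sqrt

def portfolio_vol(w, cov):
    """Portfolio volatility: sqrt(w' * Sigma * w)."""
    n = len(w)
    return sqrt(sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n)))

def marginal_risk(w, cov, i):
    """d(sigma_p)/d(w_i) = (Sigma w)_i / sigma_p.
    A negative value means a small extra holding of asset i
    decreases total portfolio risk; a positive value increases it."""
    return sum(cov[i][j] * w[j] for j in range(len(w))) / portfolio_vol(w, cov)

# Invented 4-asset universe: assets 0 and 1 are held (50 % each);
# assets 2 and 3 are the two candidates with equal expected return.
# Candidate 2 is negatively correlated with the holdings,
# candidate 3 positively correlated.
cov = [
    [0.0400, 0.0200, -0.0150, 0.0200],
    [0.0200, 0.0400, -0.0150, 0.0200],
    [-0.0150, -0.0150, 0.0625, 0.0000],
    [0.0200, 0.0200, 0.0000, 0.0625],
]
w = [0.5, 0.5, 0.0, 0.0]

print(f"portfolio vol: {portfolio_vol(w, cov):.4f}")                   # ~0.1732
print(f"candidate 2 marginal risk: {marginal_risk(w, cov, 2):+.4f}")   # negative
print(f"candidate 3 marginal risk: {marginal_risk(w, cov, 3):+.4f}")   # positive
```

With equal expected returns, the negative marginal risk of candidate 2 makes it the better addition: it raises expected return while lowering risk, which is exactly the choice described in the foreword.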

Acknowledgements

We want to acknowledge the help received in the writing of this book. In particular, we would like to thank Michael May, managing director, Bank of Bermuda Luxembourg S.A. and Christel Glaude, Group Risk Management at KBL Group European Private Bankers.

Part I The Massive Changes in the World of Finance

Introduction
1 The Regulatory Context
2 Changes in Financial Risk Management

Introduction

The financial world of today has three main aspects:

• An insurance market that is tense, mainly because of the events of 11 September 2001 and the claims that followed them.
• Regulatory pressure, which is compelling the banks to quantify and reduce risks hitherto not considered particular to banks (that is, operational risks).
• A prolonged financial crisis together with a crisis of confidence, which is pressurising the financial institutions to manage their costs ever more carefully.

Against this background, the risk management function is becoming more and more important in the finance sector as a whole, broadening its range of skills and making a contribution to decision-makers that is mostly strategic in nature. The most notable result is that cost is currently perceived in terms of the creation of value, whereas as recently as five years ago, shareholders’ perceptions were weighted too heavily towards the ‘cost of doing business’. These are the subjects we propose to develop in the first two chapters.

1 The Regulatory Context

1.1 PRECAUTIONARY SURVEILLANCE

One of the aims of precautionary surveillance is to increase the quality of risk management in financial institutions. Generally speaking:

• Institutions whose market activity is significant, in terms of contribution to results or use of equity capital cover, need to set up a risk management function that is independent of the ‘front office’ and ‘back office’ functions.
• When the establishment in question is a consolidating business, it must be a decision-making centre. The risk management function is then responsible for suggesting a group-wide policy for the monitoring of risks, while the management committee takes the risk management policy decisions for the group as a whole.
• To do this, the establishment must have adequate financial and infrastructural resources for managing the risk. The risk management function must have systems for assessing positions and measuring risks, as well as adequate limit systems and human resources.

The aim of precautionary surveillance is to:

• Promote a well-thought-out and prudent business policy.
• Protect the financial stability of the businesses overseen and of the financial sector as a whole.
• Ensure that the organisation and the internal control systems are of suitable quality.
• Strengthen the quality of risk management.

1.2 THE BASLE COMMITTEE

We do not propose to enter into methodological details on the adequacy1 of equity capital in relation to credit, market and operational risks. We do, however, intend to spend some time examining the underlying philosophy of the work of the Basle Committee2 on banking controls, paying particular attention to the qualitative dynamic (see 1.2.2 below) in the matter of operational risks.

1.2.1 General information

The Basle Committee on Banking Supervision is a committee of banking supervisory authorities, established by the central bank governors of the Group of Ten countries in 1975. It consists of senior representatives of bank supervisory authorities and central banks from Belgium, Canada, France, Germany, Italy, Japan, Luxembourg, the Netherlands, Sweden, Switzerland, the United Kingdom and the United States. It usually meets at the Bank for International Settlements in Basle, where its permanent Secretariat is located.3

1 Interested readers should read P. Jorion, Financial Risk Manager Handbook (Second Edition), John Wiley & Sons, Inc., 2003, and in particular its section on regulation and compliance.
2 Interested readers should consult http://www.bis.org/index.htm.

1.2.1.1 The current situation

The aim of the capital adequacy ratio is to ensure that the establishment has sufficient equity capital in relation to credit and market risks. The ratio compares the eligible equity capital with the overall equity capital requirements (on a consolidated basis where necessary) and must equal or exceed 100 % (or 8 % if the denominator is multiplied by 12.5). Two methods, one standard and the other based on internal models, allow the requirements in question to be calculated.

In addition, the aim of overseeing and supervising major risks is to ensure that credit risk is suitably diversified within the banking portfolios (on a consolidated basis where necessary).

1.2.1.2 The point of the ‘New Accord’4

The Basle Committee on Banking Supervision has decided to undertake a second round of consultation on more detailed capital adequacy framework proposals that, once finalised, will replace the 1988 Accord, as amended. The new framework is intended to align capital adequacy assessment more closely with the key elements of banking risks and to provide incentives for banks to enhance their risk measurement and management capabilities.
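The ratio arithmetic described in 1.2.1.1 above can be made concrete with a few lines of code. The sketch below is illustrative only (the function name and the EUR figures are invented); it shows that the two ways of stating the requirement, eligible capital of at least 100 % of the requirements, or at least 8 % once the denominator is multiplied by 12.5, are one and the same condition.

```python
def capital_ratios(eligible_capital, credit_req, market_req):
    """Compute the two equivalent capital-adequacy expressions:
    capital / requirements          (must be >= 100 %)
    capital / (12.5 * requirements) (must be >= 8 %)."""
    total_req = credit_req + market_req
    return eligible_capital / total_req, eligible_capital / (12.5 * total_req)

# Invented example: EUR 120m of eligible capital against EUR 80m of
# credit-risk and EUR 20m of market-risk capital requirements.
r_100, r_8 = capital_ratios(120.0, 80.0, 20.0)
print(f"coverage: {r_100:.0%} of requirements, i.e. {r_8:.1%} in 8 % terms")
# -> coverage: 120% of requirements, i.e. 9.6% in 8 % terms
```

Multiplying the denominator by 12.5 (that is, by 1/0.08) simply rescales the same test into the classical 8 % presentation.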

The Committee’s ongoing work has affirmed the importance of the three pillars of the new framework:

1. Minimum capital requirements.
2. Supervisory review process.
3. Market discipline.

A. First aspect: minimum capital requirements

The primary changes to the minimum capital requirements set out in the 1988 Accord are in the approach to credit risk and in the inclusion of explicit capital requirements for operational risk. A range of risk-sensitive options for addressing both types of risk is elaborated. For credit risk, this range begins with the standardised approach and extends to the ‘foundation’ and ‘advanced’ internal ratings-based (IRB) approaches. A similar structure is envisaged for operational risk. These evolutionary approaches will motivate banks to improve their risk management and measurement capabilities continuously, so as to avail themselves of the more risk-sensitive methodologies and thus more accurate capital requirements.

B. Second aspect: supervisory review process

The Committee has decided to treat interest rate risk in the banking book under Pillar 2 (supervisory review process). Given the variety of underlying assumptions needed, the Committee believes that a better and more risk-sensitive treatment can be achieved through the supervisory review process rather than through minimum capital requirements. Under the second pillar of the New Accord, supervisors should ensure that each bank has sound internal processes in place to assess the adequacy of its capital based on a thorough evaluation of its risks. The new framework stresses the importance of a bank’s management developing an internal capital assessment process and setting targets for capital that are commensurate with the bank’s particular risk profile and control environment.

3 The Bank for International Settlements, Basle Committee on Banking Supervision, Vue d’ensemble du Nouvel accord de Bâle sur les fonds propres, Basle, January 2001, p. 1.
4 Interested readers should also consult: The Bank for International Settlements, Basle Committee on Banking Control, The New Basle Capital Accord, January 2001; and The Bank for International Settlements, Basle Committee on Banking Control, The New Basle Capital Accord: An Explanatory Note, January 2001.

C. Third aspect: market discipline

The Committee regards the bolstering of market discipline through enhanced disclosure as a fundamental part of the New Accord.5 The Committee believes the disclosure requirements and recommendations set out in the second consultative package will allow market participants to assess key pieces of information on the scope of application of the revised Accord, capital, risk exposures, assessment and management processes, and capital adequacy of banks. The risk-sensitive approaches developed by the Committee rely extensively on banks’ internal methodologies, giving banks more discretion in calculating their capital requirements. Separate disclosure requirements are put forth as prerequisites for supervisory recognition of internal methodologies for credit risk, credit risk mitigation techniques and asset securitisation. In the future, disclosure prerequisites will also attach to advanced approaches to operational risk. In the view of the Committee, effective disclosure is essential to ensure that market participants can better understand banks’ risk profiles and the adequacy of their capital positions.

1.2.2 Basle II and the philosophy of operational risk6

In February 2003, the Basle Committee published a new version of the document Sound Practices for the Management and Supervision of Operational Risk. It contains a set of principles that make up a structure for managing and supervising operational risks, for banks and their regulators. In fact, risks other than credit and market risks can become more substantial as the deregulation and globalisation of financial services and the increased sophistication of financial technology add to the complexity of banks’ activities, and therefore to that of their risk profiles. By way of example, the following can be cited:

• The increased use of automated technology which, if not suitably controlled, can transform the risk of an error during manual data capture into a system breakdown risk.
• The effects of e-business.
• The effects of mergers and acquisitions on system integration.
• The emergence of banks that offer large-scale services, and the technical nature of the high-performance back-up mechanisms to be put in place.
• The use of collateral,7 credit derivatives, netting and conversion into securities, with the aim of reducing certain risks but with the likelihood of creating other kinds of risk (for example, legal risk; on this matter, see Point 2.2.1.4 in the section on ‘Positioning the legal risk’).
• Increased recourse to outsourcing and participation in clearing systems.

1.2.2.1 A precise definition?

Operational risk, in general terms and according to the Basle Committee specifically, is defined as ‘the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events’. This is a very wide definition, which includes legal risk but excludes strategic and reputational risk. The Committee emphasises that the precise approach chosen by a bank in the management of its operational risks depends on many different factors (size, level of sophistication, nature and complexity of operations, etc.). Nevertheless, it provides a more precise definition by adding that, despite these differences, clear strategies supervised by the board of directors and management committee, a solid ‘operational risk’ and ‘internal control’ culture (including, among other things, clearly defined responsibilities and demarcation of tasks), internal reporting, and plans for continuity8 following a highly damaging event are all elements of paramount importance in an effective operational risk management structure for banks, regardless of their size and environment.

Although the definition of operational risk varies de facto between financial institutions, it is still a certainty that some types of event, as listed by the Committee, have the potential to create substantial losses:

• Internal fraud (for example, insider trading on an employee’s own account).
• External fraud (such as forgery).
• Workplace safety.
• All matters linked to customer relations (for example, money laundering).
• Physical damage to buildings (terrorism, vandalism, etc.).
• Telecommunication problems and system failures.
• Process management (input errors, unsatisfactory legal documentation, etc.).

1.2.2.2 Sound practices

The sound practices proposed by the Committee are based on four major themes, subdivided into 10 principles:

• Development of an appropriate risk management environment.
• Identification, assessment, monitoring, control and mitigation in a risk management context.
• The role of supervisors.
• The role of disclosure.

5 See also Point 1.3, which deals with accounting standards.
6 This section is essentially a summary of the following publication: The Bank for International Settlements, Basle Committee on Banking Control, Sound Practices for the Management and Supervision of Operational Risk, Basle, February 2003. In addition, interested readers can also consult: Cruz, M. G., Modelling, Measuring and Hedging Operational Risk, John Wiley & Sons, Ltd, 2003; Hoffman, D. G., Managing Operational Risk: 20 Firm-Wide Best Practice Strategies, John Wiley & Sons, Inc., 2002; and Marshall, C., Measuring and Managing Operational Risks in Financial Institutions, John Wiley & Sons, Inc., 2001.
7 On this subject, see 2.1.1.4.
8 On this subject, see 2.1.1.3.

Developing an appropriate risk management environment

Operational risk management is first and foremost an organisational issue: the greater the importance attached to ethical behaviour at all levels of an institution, the better optimised its risk management will be.

The first principle is as follows: the board of directors should be aware of the major aspects of the bank’s operational risks as a distinct risk category that should be managed, and it should approve and periodically review the bank’s operational risk management framework. The framework should provide a firm-wide definition of operational risk and lay down the principles of how operational risk is to be identified, assessed, monitored and controlled/mitigated.

In addition (second principle), the board of directors should ensure that the bank’s operational risk management framework is subject to effective and comprehensive internal audit9 by operationally independent, appropriately trained and competent staff. The internal audit function should not be directly responsible for operational risk management; this independence may be compromised if the audit function is directly involved in the operational risk management process. In practice, the Committee recognises that the audit function at some banks (particularly smaller banks) may have initial responsibility for developing an operational risk management programme. Where this is the case, banks should see that responsibility for day-to-day operational risk management is transferred elsewhere in a timely manner.

Under the third principle, senior management should have responsibility for implementing the operational risk management framework approved by the board of directors. The framework should be consistently implemented throughout the whole banking organisation, and all levels of staff should understand their responsibilities with respect to operational risk management. Senior management should also have responsibility for developing policies, processes and procedures for managing operational risk in all of the bank’s material products, activities, processes and systems.

Risk management: identification, assessment, monitoring and mitigation/control

The fourth principle states that banks should identify and assess the operational risk inherent in all material products, activities, processes and systems. Banks should also ensure that before new products, activities, processes and systems are introduced or undertaken, the operational risk inherent in them is subject to adequate assessment procedures. Amongst the possible tools used by banks for identifying and assessing operational risk are:

• Self- or risk-assessment. A bank assesses its operations and activities against a menu of potential operational risk vulnerabilities. This process is internally driven and often incorporates checklists and/or workshops to identify the strengths and weaknesses of the operational risk environment. Scorecards, for example, provide a means of translating qualitative assessments into quantitative metrics that give a relative ranking of different types of operational risk exposure. Some scores may relate to risks unique to a specific business line, while others may rank risks that cut across business lines. Scores may address inherent risks, as well as the controls to mitigate them. In addition, scorecards may be used by banks to allocate economic capital to business lines in relation to performance in managing and controlling various aspects of operational risk.

9 See 2.2.1.3.

• Risk mapping. In this process, various business units, organisational functions or process flows are mapped by risk type. This exercise can reveal areas of weakness and help prioritise subsequent management action.
• Risk indicators. Risk indicators are statistics and/or metrics, often financial, which can provide insight into a bank’s risk position. These indicators tend to be reviewed on a periodic basis (such as monthly or quarterly) to alert banks to changes that may be indicative of risk concerns. Such indicators may include the number of failed trades, staff turnover rates and the frequency and/or severity of errors and omissions.
• Measurement. Some firms have begun to quantify their exposure to operational risk using a variety of approaches. For example, data on a bank’s historical loss experience could provide meaningful information for assessing the bank’s exposure to operational risk.

In its fifth principle, the Committee asserts that banks should implement a process to regularly monitor operational risk profiles and material exposures to losses. There should be regular reporting of pertinent information to senior management and the board of directors that supports the proactive management of operational risk.

In addition (sixth principle), banks should have policies, processes and procedures to control and/or mitigate material operational risks. Banks should periodically review their risk limitation and control strategies and should adjust their operational risk profile accordingly, using appropriate strategies in light of their overall risk appetite and profile.

The seventh principle states that banks should have in place contingency and business continuity plans to ensure their ability to operate on an ongoing basis and limit losses in the event of severe business disruption.
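The ‘Measurement’ tool above, quantifying exposure from historical loss experience, is commonly implemented as a frequency/severity simulation (the so-called loss-distribution approach). The sketch below is a generic illustration of that idea, not a method prescribed by the Committee; the Poisson frequency of five events per year and the lognormal severity parameters are invented for the example.

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Draw a Poisson event count (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= random.random()
        k += 1
    return k - 1

def simulate_annual_losses(n_years, freq, sev_mu, sev_sigma):
    """One aggregate operational loss per simulated year:
    a Poisson number of events, each with a lognormal severity."""
    return [sum(random.lognormvariate(sev_mu, sev_sigma)
                for _ in range(poisson(freq)))
            for _ in range(n_years)]

def quantile(values, q):
    """Simple empirical quantile of a sample."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(q * len(ordered)))]

# Invented parameters: on average 5 loss events per year, severities
# lognormal with mu=0, sigma=1 (mean loss about 1.65 per event).
losses = simulate_annual_losses(10_000, freq=5.0, sev_mu=0.0, sev_sigma=1.0)
mean_loss = sum(losses) / len(losses)
print(f"expected annual loss ~ {mean_loss:.2f}")
print(f"99th-percentile annual loss ~ {quantile(losses, 0.99):.2f}")
```

The gap between the mean (the ‘expected loss’, usually provisioned) and a high quantile (the ‘unexpected loss’, held as capital) is precisely what operational-risk capital models of this kind try to estimate.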
Role of supervisors

Under the eighth principle, banking supervisors should require that all banks, regardless of size, have an effective framework in place to identify, assess, monitor and control/mitigate material operational risks as part of an overall approach to risk management.

Under the ninth principle, supervisors should conduct, directly or indirectly, regular independent evaluation of a bank’s policies, procedures and practices related to operational risks. Supervisors should ensure that there are appropriate mechanisms in place which allow them to remain apprised of developments at banks. Examples of what an independent evaluation of operational risk by supervisors should review include the following:

• The effectiveness of the bank’s risk management process and overall control environment with respect to operational risk;
• The bank’s methods for monitoring and reporting its operational risk profile, including data on operational losses and other indicators of potential operational risk;
• The bank’s procedures for the timely and effective resolution of operational risk events and vulnerabilities;
• The bank’s process of internal controls, reviews and audit to ensure the integrity of the overall operational risk management process;
• The effectiveness of the bank’s operational risk mitigation efforts, such as the use of insurance;
• The quality and comprehensiveness of the bank’s disaster recovery and business continuity plans; and
• The bank’s process for assessing overall capital adequacy for operational risk in relation to its risk profile and, if appropriate, its internal capital targets.

Role of disclosure

Banks should make sufficient public disclosure to allow market participants to assess their approach to operational risk management.

1.3 ACCOUNTING STANDARDS The ﬁnancial crisis that started in some Asian countries in 1998 and subsequently spread to other locations in the world revealed a need for reliable and transparent ﬁnancial reporting, so that investors and regulators could take decisions with a full knowledge of the facts. 1.3.1 Standard-setting organisations10 Generally speaking, three main standard-setting organisations are recognised in the ﬁeld of accounting: • The IASB (International Accounting Standards Board), dealt with below in 1.3.2. • The IFAC (International Federation of Accountants). • The FASB (Financial Accounting Standards Board). The International Federation of Accountants, or IFAC,11 is an organisation based in New York that combines a number of professional accounting organisations from various countries. Although the IASB concentrates on accounting standards, the aim of the IFAC is to promote the accounting profession and harmonise professional standards on a worldwide scale. In the United States, the standard-setting organisation is the Financial Accounting Standards Board or FASB.12 Although it is part of the IASB, the FASB has its own standards. Part of the FASB’s mandate is, however, to work together with the IASB in establishing worldwide standards, a process that is likely to take some time yet. 1.3.2 The IASB13 In 1998 the ministers of ﬁnance and governors of the central banks from the G7 nations decided that private enterprises in their countries should comply with standards, principles and good practice codes decided at international level. They then called on all the countries involved in the global capital markets to comply with these standards, principles and practices. Many countries have now committed themselves, including most notably the European Union, where the Commission is making giant strides towards creating an obligation for all quoted companies, to publish their consolidated ﬁnancial reports in compliance with IAS standards. 
The IASB, or International Accounting Standards Board, is a private, independent standard-setting body based in London. In the public interest, the IASB has developed

10 http://www.cga-canada.org/fr/magazine/nov-dec02/Cyberguide f.htm.
11 Interested readers should consult http://www.ifac.org.
12 Interested readers should consult http://www.fasb.org.
13 Interested readers should consult http://www.iasc.org.uk/cmt/0001.asp.


a set of standardised accounting rules that are of high quality and easily understandable (known as the IAS standards). Financial statements must comply with these rules in order to ensure suitable transparency and information value for their readers. Particular reference is made to Standard IAS 39 on financial instruments, which expresses the IASB's wish to recognise balance-sheet items essentially at fair value. In particular, it demands that portfolios arising from hedging mechanisms set up in the context of asset and liability management be entered in the accounts at market value (see Chapter 12), regardless of the accounting treatment of the items that they hedge. In the field of financial risk management, it should be realised that in addition to the impact on asset and liability management, these standards, once adopted, will doubtless affect the volatility of the results published by financial institutions, as well as fluctuations in equity capital.

2 Changes in Financial Risk Management

2.1 DEFINITIONS

Within a financial institution, the purpose of the risk management function is twofold.
1. It studies all the quantifiable and non-quantifiable factors (see 2.1.1 below) that, in relation to each individual person or legal entity, pose a threat to the return generated by rational use of assets, and therefore to the assets themselves.
2. It provides the following solutions aimed at combating these factors.
— Strategic. The onus is on the institution to propose a general policy for monitoring and combating risks, ensure sensible consolidation of risks at group management level where necessary, organise the reports sent to the management committee, participate actively in the asset and liability management committee (see Chapter 12), and so on.
— Tactical. This level of responsibility covers economic and operational assessments when a new activity is planned, checks to ensure that credit has been spread safely across various sectors, the simulation of cover for exchange and interest rate risks and their impact on the financial margin, and so on.
— Operational. These are essentially first-level checks that include monitoring of internal limits, compliance with investment and stop-loss criteria, traders' limits, etc.

2.1.1 Typology of risks

The risks linked to financial operations are classically divided into two major categories:
1. Ex ante non-quantifiable risks.
2. Ex ante quantifiable risks.

2.1.1.1 Standard typology

It is impossible to overemphasise the importance of proactive management in the avoidance of non-quantifiable risks within financial institutions, because:
1. Although these risks cannot be measured, they are nevertheless identifiable, manageable and avoidable.
2. The financial consequences that they may produce are measurable, but a posteriori only.
The many non-quantifiable risks include:
1.
The legal risk (see 2.2.1.4), which is likely to lead to losses for a company that carries on ﬁnancial deals with a third-party institution not authorised to carry out deals of that type.


2. The media risk, which arises when an event undermines confidence in, or the image of, a given institution.
3. The operational risk (see 2.1.1.2 below), although recent events have tended to make this risk more quantifiable in nature.

The quantifiable risks include:
1. The market risk, which is defined as the impact that changes in market value variables may have on the position adopted by the institution. This risk is subdivided into:
— interest rate risk;
— FX risk;
— price variation risk;
— liquidity risk (see 2.1.1.4).
2. The credit risk, which arises when a counterparty is unable or unwilling to fulfil its contractual obligations:
— relative to the on-balance sheet (direct);
— relative to the off-balance sheet (indirect);
— relating to delivery (settlement risk).

2.1.1.2 Operational risk1

According to the Basle Committee, operational risk is defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems, or from external events. At first sight it is difficult to classify risks of this type as ones that could be quantified a priori, but a major change is making the risk quantifiable a priori. In fact, problems of corporate governance, much-publicised failures of internal checks that brought about the downfall of certain highly acclaimed institutions, and the combination of regulatory pressure and market pressure have led the financial community to see what has come to be called operational risk management in a completely different light. Of course operational risk management is not a new practice; its ultimate aim is to manage the added volatility of results produced by operational risk. Banks have always attached great importance to preventing fraud, maintaining the integrity of internal controls, reducing errors and ensuring that tasks are appropriately segregated.
Until recently, however, banks relied almost exclusively on internal control mechanisms within operational entities, together with internal audit,2 to manage their operational risks. This type of management, however, is now outdated. We have moved on from operational risk management fragmented across business lines to cross-functional integration; the attitude is no longer reactive but proactive. We are looking towards the future instead of back to the past, and have turned from 'cost avoidance' to 'creation of value'.

1 See also Point 1.2.2.
2 Interested readers should consult the Bank for International Settlements, Basle Committee on Banking Supervision, Internal Audit in Banks and the Supervisor's Relationship with Auditors, Basle, August 2001.


The operational risk management of today also includes:
• Identifying and measuring operational risks.
• Analysing potential losses and their causes, as well as ways of reducing and preventing losses.
• Analysing risk transfer possibilities.
• Allocating capital specifically to operational risk.

It is precisely this aspect of measurement and quantification that has brought about the transition from ex post to ex ante. In fact, methodological advances in this field have been rapid and far-reaching, and consist essentially of two types of approach.
• The qualitative approach. This is a process by which management identifies the risks and the controls in place to manage them, essentially by means of discussions and workshops. As a result, the measurement of frequency and impact is mostly subjective, but the approach has the advantage of being prospective in nature, and thus allows risks that cannot easily be quantified to be understood.
• The quantitative approach. A specific example, although not the only one, is the loss distribution approach, which is based on a database of past incidents treated statistically using a value at risk method. The principal strength of this method is that it allows correlation between risk categories to be integrated, but its prospective outlook is limited because it accepts the hypothesis of stationarity as true.

Halfway between these two approaches is the scorecards method, based on risk indicators. In this approach, the institution determines an initial regulatory capital level for operational risk, at global level and/or for each business line. It then modifies this total as time passes, on the basis of so-called scorecards that attempt to take account of the underlying risk profile and the risk control environment within the various business lines. This method has several advantages:
• It allows a specific risk profile to be determined for each organisation.
• The effect on behaviour is very strong, as managers in each individual entity can act on the risk indicators.
• It allows best practices to be identified and communicated within the organisation.
It is, however, difficult to calibrate the scorecards and to allocate specific economic capital.

A refined quantification of operational risk thus allows:
• Its cost (expected losses) to be made clear.
• Significant exposures (unexpected losses) to be identified.
• A framework to be produced for profit-and-cost analysis (and excessive controls to be avoided).

In addition, systematic analysis of the sources and causes of operational losses leads to:
• Improvements in processes and quality.
• Optimal distribution of best practices.
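The loss distribution approach mentioned above can be sketched in a few lines. The following is a minimal illustrative simulation, not the model used by any particular institution: loss frequency is drawn from a Poisson distribution and loss severity from a lognormal, and the capital indicator is read off a high quantile of the simulated aggregate loss. All parameter values and function names are hypothetical.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's inversion algorithm; adequate for small lambda
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_losses(freq_mean, sev_mu, sev_sigma, n_years, seed=42):
    """Aggregate operational loss per simulated year:
    Poisson(freq_mean) events, each with lognormal(sev_mu, sev_sigma) severity."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        n_events = poisson(rng, freq_mean)
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(n_events)))
    return totals

def operational_var(totals, confidence=0.999):
    """Capital indicator: a high quantile of the simulated aggregate loss distribution."""
    ordered = sorted(totals)
    idx = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[idx]

losses = simulate_annual_losses(freq_mean=25, sev_mu=10.0, sev_sigma=1.2, n_years=10_000)
expected_loss = sum(losses) / len(losses)                    # the 'cost' of operational risk
unexpected_loss = operational_var(losses) - expected_loss    # the significant exposure
```

Note how the stationarity hypothesis criticised in the text appears here: the simulation assumes that the frequency and severity parameters fitted to past incidents remain valid in the future.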


A calculation of the losses attributable to operational risk therefore provides a framework that allows controls to be linked to performance measurement and shareholder value. That said, this approach to mastering operational risk must also allow insurance programmes to be rationalised (the concept of risk transfer), in particular by integrating the business continuity plan, or BCP, into it.

2.1.1.3 The triptych: operational risk – risk transfer – BCP

See Figure 2.1.

A. The origin, definition and objective of Business Continuity Planning
A BCP is an organised set of provisions aimed at ensuring the survival of an organisation that has suffered a catastrophic event. The concept of BCP originated in emergency computer recovery plans, which have now been extended to cover the human and material resources essential for ensuring continuity of a business's activities. Because of this extension, the activities that lead to the constitution of a BCP involve everyone in a business and require coordination by all the departments concerned. In general, the BCP consists of a number of interdependent plans that cover three distinct fields.
• The preventive plan: the full range of technical and organisational provisions applied on a permanent basis with the aim of ensuring that unforeseen events do not render critical functions and systems inoperative.
• The emergency plan: the full range of provisions, prepared and organised in advance, to be applied when an incident occurs in order to ensure continuity of critical systems and functions or to reduce the period of their non-availability.
• The recovery plan: the full range of provisions, prepared and organised in advance, aimed at reducing the period of application of the emergency plan and re-establishing full service functionality as soon as possible.

Figure 2.1 Triptych: operational risk management, insurance and the BCP (identification, evaluation, transfer, prevention, a posteriori management)


B. The insurance context
After the events of 11 September 2001, the thought processes and methods relating to the compilation of a BCP were refined. Businesses were forced to realise that the issue of continuity needed to be overseen in its entirety (prevention, insurance, recovery plan and/or crisis management). The tensions prevailing in today's insurance market have only increased this awareness; the reduction in capacity is pushing businesses towards a policy of self-insurance and, in consequence, towards setting up new processes believed to favour a more rapid recovery after a major incident. Several major actors in the market are currently reflecting on the role that they should play in this context, and some guidelines have already been laid down.
The insurance and reinsurance companies thus have an essential communication role to play. They hold a wealth of information, unrivalled and clearly unlikely to be rivalled, on 'prejudicial' events and their causes, development, pattern and management. The sharing of insured parties' experiences is a rich source of information for learning about processes, methods and errors so that clients may benefit from them. Another source is training. The wealth of information available to them also allows insurers and reinsurers to provide well-informed advice based on a pragmatic approach to the problems encountered. In this context, integrating the BCP into the risk management function, provided that insurance management is also integrated, will bring the benefits of shared information and allow better assessment of the practical opportunities for implementation. Similarly, the undisputed links between certain insurance policies and the BCP also argue for integration, together with operational risk management, which must play an active role in the various analyses relating to the continuity plan.
C.
The connection between insurance and the BCP
To illustrate our theme, we examine here three types of policy.
• The 'all risks' policy, which covers the policyholder's interests in all fixed and movable assets owned or used by that person. The policy may include an extension for 'extra expenses' cover. Such expenses correspond to the charges that the institution has to bear in order to function 'normally' following an incident (for example, hire of premises or equipment, additional working hours, etc.). In this case, the insurance compensates for the full range of measures taken under a BCP.
• The 'business interruption' policy, which covers the institution against loss of income, interest, and additional charges and business expenses arising from interruption of its activity in its premises following the occurrence of an insured event. The objective, in fine, is to compensate losses that affect results following an incident covered by another guarantee (direct damage to property owned by the institution, for example). In this case also, the links are clear: the agreements concluded will compensate for the inevitable operating losses between the occurrence of the event and the resumption of activities, made possible more quickly by the BCP.
• The 'crisis management' policy, which guarantees payment of consultants' costs incurred by the institution in dealing with a crisis situation, that is, in drawing up plans of action and procedures to manage the crisis and in securing the communication and legal resources needed to contain it and minimise its initial effects. If an event


that satisfies the BCP implementation criteria occurs, this insurance policy will provide additional assistance in the effort to reduce the consequences of the crisis. In addition, this type of agreement usually sets out a series of events likely to lead to a 'crisis situation' (death of a key figure, government inquiry or investigation, violent incidents in the workplace, etc.). Setting such a policy alongside the continuity plan can thus provide an interesting tool for optimising developments in the BCP.

D. The connection between operational risk and the BCP
The starting hypothesis generally accepted for compiling a BCP takes account of the consequences, not the causes, of a catastrophic event. The causes, however, cannot be fully ignored, and also need to be analysed to make the continuity plan as efficient as possible. As operational risk is defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events, the measures provided for by the BCP will very often be designed to follow the occurrence of an operational risk.

E. Specific expressions of the synergy
The synergy described above can be expressed specifically as:
• Use of the BCP in negotiations between the institution and its insurers. The premium payable and cover afforded under certain insurance policies (all risks and business interruption) may be directly influenced by the content of the institution's BCP. Coordination of the said BCP within the risk management function thus favours orientation of the provisions in the direction 'desired' by the insurers and allows the strategies put in place to be optimised.
• Once set up, the plan must be refined as and when the operational risks are identified and evaluated, thus giving it added value.
• In the same order of ideas, insurance policies can play a major financial role in the application of the steps taken to minimise the effects of the crisis.
• The possibility of providing 'captive cover' to meet the expenses incurred in applying the steps provided for in the BCP may also be of interest from the financial viewpoint.

2.1.1.4 Liquidity risk:3 the case of a banking institution

This type of risk arises when an institution is unable to fund itself in good time, or at a price that it considers reasonable. A distinction is drawn between ongoing liquidity management, which is the role of the bank treasury, and liquidity crisis management. The Basle Committee asserts that both aspects must be covered by banking institutions' asset and liability management committees. A liquidity crisis can be reproduced in a simulation, using methods such as the maximum cash outflow, which allows the survival period to be determined.

3 Interested readers should consult the Bank for International Settlements, Basle Committee on Banking Supervision, Sound Practices for Managing Liquidity in Banking Organisations, Basle, February 2000.


A. Maximum Cash Outflow and Survival Period
The first stage consists of identifying the liquidity lines:
1. Is the institution a net borrower or net lender in the financial markets, and does it have a strategic liquidity portfolio?
2. Can the bond and treasury bill portfolios be liquidated through repos and/or resales?
3. Can the 'credit' portfolios of the synthetic asset swap type be liquidated by the same means?
And, last but not least:
4. What level of assistance might be expected from the reference shareholder or from other companies in the same group?

An extreme liquidity crisis can then be simulated, on the premise that the institution cannot borrow on the markets and does not rely on assistance from its reference shareholder or other companies within the group. A number of working hypotheses can be taken as examples. On the crisis day (D), let us suppose that:
• The institution has had no access to borrowing on the interbank market for five working days.
• Both private and institutional clients immediately withdraw all their cash deposits within the legal framework:
— All current accounts are repaid on D + 1.
— All deposits with 24 and 48 hours' notice are repaid on D + 1 and D + 2 respectively.
— All savings accounts are repaid on D + 1.
• The institution has to meet all its contractual obligations in terms of cash outflows:
— The institution repays all the borrowings contracted by it and maturing between D and D + 5.
— The institution honours all the loans contracted by it with start dates between D and D + 5.
• The only course of action the institution can take to obtain further liquidity is to sell its assets:
— It is assumed, for example, that the treasury bill portfolio can be liquidated one-quarter through repos on D + 1 and three-quarters by sales on D + 2.
— It is assumed, for example, that the debenture and floating-rate note portfolios can be liquidated via repo or resale to the extent of 85% on D + 1, if the currency (GB£) allows, and by sale on D + 2.
— It is assumed, for example, that the synthetic asset swap portfolio can be liquidated 30% on D + 3, 30% on D + 4 and the balance on D + 5, taking account of the ratings.

The cash-in and cash-out movements are then simulated for each of the days under review, giving a positive or negative cash balance for each day. The survival period is the period for which the institution shows a positive cash balance. See Figure 2.2. In the following example it will be noted that, in view of the hypothetical catastrophic situation adopted, the institution is nevertheless capable of facing a serious liquidity crisis for three consecutive dealing days without resorting to external borrowing.
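The survival-period calculation itself reduces to accumulating the simulated daily net flows and finding the last day on which the balance is still positive. The sketch below is a hypothetical illustration: the cash figures are invented, chosen only so that the balance turns negative on D + 4 and the survival period comes out at the three days of the example; the function name and the flow decomposition are not from the text.

```python
def survival_period(opening_cash, daily_net_flows):
    """Number of consecutive days the cumulative cash balance stays non-negative.
    daily_net_flows[d] is the net cash movement (inflows +, outflows -) on day D + d + 1."""
    balance = opening_cash
    days = 0
    for flow in daily_net_flows:
        balance += flow
        if balance < 0:
            break
        days += 1
    return days

# Hypothetical crisis-day flows (millions of US$), following the hypotheses above:
# deposit run-off and maturing borrowings on the out side, asset sales on the in side.
flows = [
    -600 + 450,   # D+1: current and savings accounts repaid; T-bill repos, FRN sales
    -250 + 300,   # D+2: 48-hour deposits repaid; remaining T-bills sold
    -150 + 90,    # D+3: maturing borrowings repaid; 30% of asset swaps sold
    -200 + 90,    # D+4: further outflows exceed the next asset swap tranche
    -300 + 120,   # D+5: balance of the asset swap portfolio
]
print(survival_period(200, flows))   # → 3 (the balance survives D+1 to D+3)
```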


Figure 2.2 Survival period: liquidity (millions of US$) against number of days (0 to 5)

It should, however, be noted that recourse to repos in particular will be much more effective if the financial institution optimises its collateral management. We now turn to this point.

B. Collateral management4
Collateral management is one of the three techniques most commonly used in the financial markets to manage credit risks, and most notably counterparty risk. The main reason for its success is that the transaction-related costs are limited (because collateral agreement contracts are heavily standardised). The three fields in which collateral management is encountered are:
1. The repo market.
2. The OTC derivatives market (especially if the institution has no rating).
3. Payment and settlement systems.

The assets used as collateral are:
• Cash (which will be avoided, as it inflates the balance sheet, to say nothing of the operational risks associated with transfers and the risk of depositor bankruptcy).
• Government bonds (although the stocks are becoming scarcer).
• Equities in the major indices (liquid, since their capitalisation qualifies them for inclusion in such indices).
• Bonds issued by the private sector (although particular attention will be paid to rating here).

Generally speaking, the counterparty receiving the collateral is clearly less exposed in terms of counterparty risk. There is, however, a credit risk on the collateral itself: the issuer risk (inherent in the security) and the liquidity risk (associated with the security). The risks linked to the collateral must be 'monitored', as both the product price variation

4 Interested readers should consult the Bank for International Settlements, BIS Quarterly Review, Collateral in Wholesale Financial Markets, Basle, September 2001, pp. 57–64. Also: Bank for International Settlements, Committee on the Global Financial System, Collateral in Wholesale Financial Markets: Recent Trends, Risk Management and Market Dynamics, Basle, March 2001.


that necessitates the collateral and the collateral price variation have an effect on the coverage of the potential loss on the counterparty and on the collateral that the counterparty has provided. Collateral management is further complicated by the difficulty of estimating the correlation between collateral price fluctuations and the 'collateralised' derivative. A negative correlation will significantly increase the credit risk: when the value of the collateral falls, the credit risk increases. The question of adjustment is of the first importance. Too much sophistication could lead to the risk of the trader hesitating over whether to enter into 'collateralised' deals. Conversely, too little sophistication risks a shift from counterparty risk to issuer and liquidity risk, and what is the good of that?
Collateral management improves the efficiency of the financial markets; it makes access to the market easier. If it is used, more participants will make competition keener; prices will tighten and liquidity will increase. Cases of adverse effects have, however, been noted, especially in times of stress. The future of collateral management is rosy: the keener the competition in the financial markets, the tighter the prices and the greater the need for those involved to take on additional risks.

2.1.2 Risk management methodology

While quantifiable risks, especially market risks, can of course be measured, a good understanding of the risk in question will depend on the accuracy, frequency and interpretation of such measurement.

2.1.2.1 Value of one basis point (VBP)

The VBP quantifies the sensitivity of a portfolio to a parallel and unilateral upward or downward movement of the interest rate curve, at a resolution of one one-hundredth of a per cent (one basis point). See Figure 2.3.
This simple method quantiﬁes the sensitivity of an asset or portfolio of assets to interest rates, in units of national currency; but it must be noted that the probability of a parallel ﬂuctuation in the curve is low, and that the method does not take account of any curvature or indeed any alteration in the gradient of the curve.
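As an illustration, the VBP of a simple bond position can be computed by repricing it after a parallel one-basis-point shift of the curve. The flat 6% curve and the five-year par bond below are hypothetical, and a real implementation would discount off the full zero curve rather than a single rate.

```python
def present_value(cashflows, shift=0.0):
    """PV of (time_in_years, amount) cashflows off a flat 6% curve,
    shifted in parallel by `shift` (expressed as a decimal rate)."""
    rate = 0.06 + shift
    return sum(cf / (1 + rate) ** t for t, cf in cashflows)

def vbp(cashflows):
    """Value of one basis point: PV change for a +0.01% parallel shift of the curve."""
    return present_value(cashflows, 0.0001) - present_value(cashflows)

# hypothetical five-year 6% annual-coupon bond, face value 1,000,000
bond = [(t, 60_000) for t in range(1, 5)] + [(5, 1_060_000)]
sensitivity = vbp(bond)   # negative: the position loses value as rates rise
```

The answer comes out in units of the currency, exactly as the text describes, and carries no probability: it says nothing about how likely a one-basis-point parallel move actually is.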

Figure 2.3 VBP: rate (5.99%, 6.00%, 6.01%) against maturity dates, showing the current curve and the one-basis-point shift


Finally, it should be noted that the measurement is immediate and that the probability of occurrence is not captured.

2.1.2.2 Scenarios and stress testing

Scenarios and stress testing allow the rates to be altered at more than one point on the curve, upwards for one or more maturity dates and downwards for one or more maturity dates at the same time. See Figure 2.4.

Figure 2.4 Stress testing: rate against maturity dates, showing the current curve and a stress-tested curve
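A multi-point movement of this kind can be sketched by shifting each maturity independently before repricing. The twist scenario, the flat 6% base curve and the bond below are hypothetical illustrations, not taken from the text.

```python
def pv(cashflows, zero_rates):
    """PV of (year, amount) cashflows, each discounted at the zero rate for its maturity."""
    return sum(cf / (1 + zero_rates[t]) ** t for t, cf in cashflows)

base_curve = {t: 0.06 for t in range(1, 6)}   # flat 6% over five annual maturities

# stress scenario: short rates up, long rates down (a curve twist)
stressed = dict(base_curve)
stressed[1] += 0.0100
stressed[2] += 0.0050
stressed[4] -= 0.0050
stressed[5] -= 0.0050

bond = [(t, 60_000) for t in range(1, 5)] + [(5, 1_060_000)]
impact = pv(bond, stressed) - pv(bond, base_curve)   # P&L of the scenario
```

Unlike the VBP, nothing in the calculation says how probable such a twist is; the likelihood of the scenario remains a matter of judgement.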

This method is used for simulating and constructing catastrophe scenarios (a forecast of what, it is assumed, will never happen). More refined than the VBP, this method is more difficult to implement, and the time and probability aspects are still not involved.

2.1.2.3 Value at risk (VaR)

Regardless of the forecasting technique adopted, the VaR is a number that represents the maximum estimated loss for a portfolio that may be multi-currency and multi-product (expressed in units of national currency), due to market risks, for a specific time horizon (such as the next 24 hours) and with a given probability of occurrence (for example, five chances in 100 that the actual loss will exceed the VaR). See Figure 2.5.
In the case of the VaR, as Figure 2.5 shows, we determine the movement of the curve that, with a certain chance of occurrence (for example, 95%) for a given time horizon (for example, the next 24 hours), will produce the least favourable fluctuation in value for the portfolio in question, this fluctuation being of course estimated. In other words, the

Figure 2.5 VaR: rate against maturity dates, showing the current curve and the VaR curve movement


Table 2.1 VBP, stress testing and VaR

                VBP                     Stress testing                   VaR
Indication      'Uniform' sensitivity   'Multi-way' sensitivity          Maximum estimated loss
Time            Immediate               Immediate                        Time horizon
Probability     No                      No                               Yes
Advantages      Simple                  More realistic curve movement    Standard and complete
Disadvantages   Not greatly refined     Probability of scenario          Methodological choice
                                        occurring?                       and hypotheses

actual loss observed must not exceed the VaR in more than 5% of cases (in our example); otherwise, the VaR will be a poor estimate of the maximum loss. This method, which we explore in detail in Chapter 6, is complementary to the VBP and stress testing. In other words, none of these methods should be judged sufficient in itself, but together they should produce a sufficiently strong and reliable risk matrix. As can be seen from the comparison in Table 2.1, the VaR represents a priori the most comprehensive method for measuring market risk. However, methodological choices must be made and well-thought-out hypotheses applied in order to produce a realistic VaR value. If this is done, VaR can be considered the market standard for assessing the risks inherent in market operations.
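Among the methodological choices just mentioned, one of the simplest is historical simulation: the VaR is read directly off the empirical distribution of past daily P&L. The sketch below is illustrative only; the randomly generated P&L series stands in for a real revaluation history, and the function name is our own.

```python
import random

def historical_var(pnl_history, confidence=0.95):
    """Historical-simulation VaR: the loss (reported as a positive number)
    that past daily P&L exceeded in only about (1 - confidence) of cases."""
    losses = sorted(-p for p in pnl_history)              # losses as positive values, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

rng = random.Random(1)
pnl = [rng.gauss(0, 100_000) for _ in range(500)]         # hypothetical daily P&L, currency units
var_95 = historical_var(pnl, confidence=0.95)
# with roughly 5 chances in 100, the next day's loss will exceed var_95
```

The confidence level and time horizon are exactly the two parameters the text highlights; backtesting then checks that the actual loss exceeds the VaR in no more than the expected fraction of cases.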

2.2 CHANGES IN FINANCIAL RISK MANAGEMENT

2.2.1 Towards an integrated risk management

As Figure 2.6 shows, the risk management function is multidisciplinary, the common denominator being the risk vector. From this, an 'octopus' pattern is evident; there is only one step, but. . .

2.2.1.1 Scope of competence

The risk management function must operate within a clearly defined scope of competence, which will often be affected by the core business of the institution in question. Although it is generally agreed that the job of monitoring market risk falls to risk management, for example, what happens to reputational risk, legal risk (see 2.2.1.4) and strategic risk? And let us not forget operational risk: although the Basle Committee (see Chapter 1) explicitly excludes it from the field of competence of internal audit and includes it in that of risk management, a significant number of institutions have not yet taken that step.
Naturally, this leads to another problem. The controlling aspect of a risk management function is difficult to define, as it is very often limited to certain back-office control checks, and there is also a tendency to confuse the type of tasks assigned to internal audit with those proper to risk management.

Figure 2.6 Integrated Risk Management (Source: Deloitte & Touche): insurance risk (property, casualty, liability; multi-line and multi-risk insurance products), financial risk (capital markets/treasury risk; market and liquidity risk; analytics and modelling; credit analytics), strategic risk (strategic, business, process and cultural risk management), operational risk (engineering; COSO operations compliance; quality; control self-assessment) and process risk (COSO financial; financial internal control), converging on integrated risk management

2.2.1.2 Back office vs. risk management

In the back office vs. risk management debate, it is well worth remembering that, depending on the views of the regulator, the back office generally deals with the administration of operations and as such must, like every other function in the institution, carry out a number of control checks. There are two types of back-office control check:
• The daily control checks carried out by staff, for example each employee's monitoring of their suspense account.
• The ongoing continuous checks, such as monitoring the accuracy and comprehensiveness of data communicated by persons responsible for business and operational functions in order to oversee the operations administratively.

When mentioning checks to be made by risk management, however, one refers to exception-process checks in accordance with the bank's risk management policy, for example:
• Monitoring any limit breaches (limits, stop losses, etc.).
• Monitoring (reconciling) any differences between positions (or results) taken (calculated) within various entities (front, back, accounting, etc.).

2.2.1.3 Internal audit vs. risk management

The role of audit in a financial group is based on four main aspects:
• Producing a coherent plan for the audit activities within the group.


• Ensuring that all auditable activities, including the group's subsidiaries and the holding company within the responsibilities of the parent company, are covered through the conduct or review of audits.
• Applying a uniform audit method across all group entities.
• Providing the directors of the parent company and of the subsidiaries, on the basis of a homogeneous style of reporting, with maximum visibility on the quality of their internal control systems.

Although risk management by its very nature is also concerned with the efficiency of the internal control system, it must be remembered that this function is a tool designed to help the management of the institution in its decision making. Risk management is therefore part of the auditable domain of the institution. We saw the various responsibilities of risk management in Section 2.1.

2.2.1.4 Position of legal risk

In practice, every banking transaction is covered by a contract (spoken or written) that carries a certain degree of legal risk. This risk is more pronounced in transactions involving complex securities such as derivative products or securities lending. From the regulator's point of view, legal risk is the risk of contracts not being legally enforceable. Legal risk must be limited and managed through policies developed by the institution, and a procedure must be put in place to guarantee that the parties' agreements will be honoured. Before entering into derivatives transactions, the bank must ensure that its counterparties have the legal authority to enter into those deals. In addition, the bank must verify that the conditions of any contract governing its activities with the counterparty are legally sound.
The legal risk linked to stock-market deals can in essence be subdivided into four types of subrisk.
1.
Product risk, which arises from the nature of the deal without taking into account the counterparty involved; for example, failure to evaluate the legal risk when new products are introduced or existing products are changed. 2. Counterparty risk. Here the main risk is that the counterparty does not have the legal capacity to embark on the deal in question. For example, the counterparty may not have the capacity to trade in derivative products or the regulatory authority for speciﬁc transactions, or indeed may not even have the authority to conclude a repo contract. 3. Transaction risk. This is certainly the most signiﬁcant part of the legal risk and covers actions undertaken in the conclusion of operations (namely, transaction and documentation). When the deal is negotiated and entered into, problems may arise in connection with regulatory or general legal requirements. For example: closing a spoken agreement without listing the risks involved beforehand, compiling legal documentation or contracts without involving the legal department, negotiating derivative product deals without involving the legal department or without the legal department reviewing the signed ISDA Schedules, signing Master Agreements with foreign counterparties without obtaining an outside legal opinion as to the validity of default, and ﬁnally documentary errors such as inappropriate signatures, failure to sign the document or


failure to set up procedures aimed at ensuring that all contractual documentation sent to counterparties is returned to the institution duly signed.
4. Process risk. In the event of litigation in connection with a deal or any other consequence thereof, it will be necessary to undertake certain actions to ensure that the financial consequences are minimised (protection of proof, coordination of litigation etc.). Unfortunately, this aspect is all too often missing: records and proof of transactions are often insufficient (failure to record telephone conversations, destruction of emails etc.).

These four categories of risk are correlated. Fundamentally, the legal risk can arise at any stage in the deal (pre-contractual operations, negotiation, conclusion and post-contractual procedures). In this context, placing the legal risk connected with financial deals within the risk management function presents certain advantages:

• Assessment of the way in which the legal risk will be managed and reduced.
• The function has a central position that gives an overall view of all the bank's activities.
• Increased efficiency in the implementation of legal risk management procedures in financial transactions, and involvement in all analytical aspects of the legal risk on the capital market.

2.2.1.5 Integration

It is worrying to note the abundance of energy being channelled into the so-called problem of the 'fully integrated computerised risk-management system'. One and the same system for market risks, credit risks and operational risks? Not possible! The interesting problem with which we are confronted here is that of integrating systems for monitoring different types of risk. We have to ask ourselves questions on the real added value of getting everything communicated without including the unmentionable: the poorly secured accessories such as spreadsheets and other non-secured relational databases.

Before getting involved with systems, and the expensive assessments associated with developing 'black boxes', we think it wiser to ask a few questions on the cultural integration of risk management within a business. The regulator has clearly understood that the real risk management debate in the next 10 years will be on a qualitative, not a quantitative, level. Before moving on to the quantitative models proposed by Basle II, should we not first of all pay attention to a series of qualitative criteria and organise ourselves around them? Surely the figures produced by advanced operational risk methods are of a behavioural nature in that they show us the 'score to beat'. To sum up, is it better to be well organised, with professional human resources who are aware of the risk culture, or to pride ourselves on being the owners of the Rolls Royce of Value at Risk calculation vehicles? When one remembers that Moody's5 attaches ever-increasing importance to the evaluation of operational risk as a criterion for awarding its ratings, and the impact of these ratings on finance costs, is it not worth the trouble of achieving compliance from

5 Moody's, Moody's Analytical Framework for Operational Risk Management of Banks, Moody's, January 2003.


the qualitative viewpoint (notwithstanding the savings made on capital through bringing the equity fund into line)?

A risk management function should ideally:

• Report directly to executive management.
• Be independent of the front and back office functions.
• Be located at a sufficiently senior hierarchical level to guarantee real independence, having the authority and credibility it needs to fulfil its function, both internally (especially vis-à-vis the front and back offices) and externally (vis-à-vis the regulator, external audit and the financial community in general).
• Be a member of the asset and liability management committee.
• Where necessary, oversee all the decentralised risk-management entities in the subsidiaries.
• Have as its main task the proposal of an institution-wide policy for monitoring risks and ensuring that the decisions taken by the competent bodies are properly applied, relying on the methodologies, tools and systems that it is responsible for managing.
• Have a clearly defined scope of competence, which must not be limited to market and credit risks but extend to operational risks (including insurance and BCP), the concentration risk and, in particular, the risks linked to asset management activity.
• Play a threefold role in the field of risks: advice, prevention and control.

But at what price?

2.2.2 The 'cost' of risk management

A number of businesses believed that they could make substantial savings by spending a bare minimum on the risk management function. It is this serious lack of foresight, however, that has led to collapse and bankruptcy in many respectable institutions. The commonest faults are:

1. One person wearing two 'hats' for the front and back office, a situation that is, to say the least, conducive to fraud.
2. Non-existence of a risk management function.
3. Inability of management, or of persons delegated by management, to understand the activities of the market and the products used therein.
4. Lack of regular and detailed reporting.
5. Lack of awareness among employees, at all levels, of the quantifiable and/or non-quantifiable risks likely to be generated, albeit unwittingly, by those employees.
6. Incompatibility of the volumes and products processed both with the business and with back-office and accounting procedures.

At present, market and regulatory pressure is such that it is unthinkable for a respectable financial institution not to have a risk management function. Instead of complaining about its cost, however, it is better to make it into a direct and indirect profit centre for the institution, and concentrate on its added value. We have seen that a well-thought-out risk management limits:

• Excessive control (large-scale savings, prevention of doubling-up).


• Indirect costs (every risk avoided is a potential loss avoided and therefore money gained).
• Direct costs (the capital needed to cover the threefold exposure to market, credit and operational risk is reduced).

The promotion of a real risk culture increases the stability and quality of profits, and therefore improves the competitive quality of the institution and ensures that it will last.

2.3 A NEW RISK-RETURN WORLD

2.3.1 Towards a minimisation of risk for an anticipated return

Assessing the risk from the investor's point of view produces a paradox:

• On the one hand, taking risk is the only way of making money. In other terms, the investor is looking for the risk premium that corresponds to his degree of aversion to risk.
• On the other hand, although the 'risk premium' represents profit first and foremost, it also unfortunately represents potential loss.

We believe that we are now moving from an era in which investors continually looked to maximise return for a given level of risk (or without thinking about risk at all), into a new era in which the investor, for an anticipated level of return, will not rest until the attendant risk has been minimised. We believe that this attitude will prevail for two different reasons:

1. Institutions that offer financial services, especially banks, know the levels of return that their shareholders demand. For these levels of return, their attitude will be to find the route that allows them to achieve their objective while taking the smallest possible risk.
2. The individual persons and legal entities that make up the clientele of these institutions, faced with a less certain economic future, will look for a level of return that at least allows them to preserve their buying and investing power. This level is therefore known, and they will naturally choose the financial solution that presents the lowest level of risk for that level of return.

2.3.2 Theoretical formalisation

As will be explained in detail in Section 3.1.1,6 in the chapter on equities, the return R is a random variable whose probability distribution is described in part by two parameters: a location index, the expected value, termed E(R), and a dispersion index, the variance, noted var(R). The first quantity corresponds to the expected return.

6 Readers are referred to this section and to Appendix 2 for the elements of probability theory needed to understand the considerations that follow.


Figure 2.7 Selecting a portfolio (efficient frontier and portfolio P in the (var(R), E(R)) plane)

Figure 2.8 Selecting a portfolio (portfolios P and Q in the (var(R), E(R)) plane, with the target return E)

The square root of the second, σ(R) = √var(R), is the standard deviation, which is a measurement of risk. A portfolio, like any isolated security, will therefore be represented by a mean-variance couple. This couple depends on the expected return level and the variance of return of the various assets in the portfolio, but also on the correlations between those assets. A portfolio will be 'ideal' for an investor (that is, efficient) if, for a given expected return, it has a minimal variance or if, for a fixed variance, it has a maximum expected return. All the portfolios thus defined make up what is termed the efficient frontier, which is represented graphically in Figure 2.7. In addition, in the same plane, the indifference curves represent the portfolios with an equivalent mean-variance combination in the investor's eyes (that is, they have for him the same level of utility7). The selection is therefore made theoretically by choosing the portfolio P on the efficient frontier located on the indifference curve furthest away (that is, with the highest level of utility), as shown in Figure 2.7. In a situation in which an investor no longer acts on the basis of a classic utility structure, but instead wishes for a given return E and then tries to minimise the variance, the indifference curves will be cut off at the ordinate E and the portfolio selected will be Q, which clearly presents a lower expected return than that of P but also carries a lower risk than P. See Figure 2.8.

7 Readers are referred to Section 3.2.7.
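To make the mean-variance couple concrete, here is a minimal numerical sketch (not an example from the book) for a portfolio of two assets. The expected returns, volatilities and correlation are invented; the closed-form minimum-variance weight follows from setting the derivative of the two-asset variance to zero.

```python
# Hypothetical two-asset illustration of the mean-variance couple.
e1, e2 = 0.08, 0.12          # assumed expected returns E(R1), E(R2)
s1, s2 = 0.15, 0.25          # assumed standard deviations sigma1, sigma2
rho = 0.3                    # assumed correlation between the two returns

cov = rho * s1 * s2          # covariance of the two returns

def portfolio(w):
    """Mean-variance couple for weight w in asset 1 (and 1 - w in asset 2)."""
    mean = w * e1 + (1 - w) * e2
    var = w**2 * s1**2 + (1 - w)**2 * s2**2 + 2 * w * (1 - w) * cov
    return mean, var

# Weight of asset 1 in the global minimum-variance portfolio
w_min = (s2**2 - cov) / (s1**2 + s2**2 - 2 * cov)
mean_min, var_min = portfolio(w_min)
print(w_min, mean_min, var_min)
```

Sweeping `w` over [0, 1] and plotting `var` against `mean` traces out the feasible set whose upper-left boundary is the efficient frontier discussed above.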

Part II Evaluating Financial Assets

Introduction
3 Equities
4 Bonds
5 Options


Introduction

Two fundamental elements

Evaluation of financial assets should take account of two fundamental aspects: chance and time.

The random aspect

It is obvious that the changes in value of a financial asset cannot be predicted in a deterministic manner purely by looking at what happened in the past. This is quite clear for equities, whose prices fluctuate according to the law of supply and demand, these prices being dictated by the perception that market participants have of the value of the business in question. The same applies to products that are sometimes described as 'risk-free', such as bonds; here, for example, there is the risk of bankruptcy, the currency risk and the risk posed by changes in interest rates. For this reason, financial assets can only be evaluated in a random context, and the models that we will be putting together cannot work without the tool of probability (see Appendix 2 for the essential rules).

The temporal aspect

Some financial asset valuation models are termed monoperiodic, such as Markowitz's portfolio theory. These models examine the 'photograph' of a situation at a given moment and use historical observations to analyse that situation. On the other hand, there may be a wish to take account of development over time, with decisions possible at any moment according to the information available at that moment. The random variables mentioned in the previous paragraph then turn into stochastic processes, and the associated theories become much more complex. For this reason, the following chapters (3, 4 and 5) will feature both valuation models (from the static viewpoint) and development models (from the dynamic viewpoint). In addition, for the valuation of options only, the development models for the underlying asset are essential, because of the intrinsic link between this product and the time variable.

The dynamic models can be further divided into discrete models (where development is observed at a number of points spaced out over time) and continuous models (where the time variable takes its values within a continuous range such as an interval). The mathematical tools used for this second type of model are considerably more complex.

Two basic principles

The evaluation (or development) models, like all models, are based on a certain number of hypotheses. Some of these are purely technical and have the aim of guaranteeing the meaning of the mathematical expressions that represent them; they vary considerably according to the model used (static or dynamic, discrete or continuous) and may take the form of integrability conditions, restrictions on probability laws, stochastic processes, and so on. Other hypotheses are dictated by economic reality and the behaviour of investors,1 and we will be covering here the two economic principles generally accepted in financial models.

1 We will be touching on this last aspect in Section 3.2.6.


The perfect market

Often a hypothesis that is simplistic to the point of being unrealistic, that of the perfect market, will be put forward. Despite its reductive nature, it defines a context in which financial assets can be modelled, and many studies have been conducted with the aim of weakening the various elements of this hypothesis. The perfect market2 is a market governed by the law of supply and demand, on which:

• Information is available in equal measure to all investors.
• There are no transaction or issue costs associated with the financial assets.
• There is no tax deduction on the income produced by the financial assets (where increases in value or dividends are involved, for example).
• Short sales are authorised without restriction.

Absence of arbitrage opportunity

An arbitrage opportunity is a portfolio defined in a context in which:

• No financial movement occurs within the portfolio during the period in question.
• The risk-free interest rate does not alter during the period in question and is valid for any maturity date (a flat, constant rate curve).

It is a portfolio whose initial value (value at the point of constitution) is negative or zero, but whose value at a subsequent time is certain and non-negative, one of the two inequalities being strict. More specifically, if the value of the portfolio at the moment t is termed V_t, we are looking at a portfolio for which:

V_0 < 0 and V_T ≥ 0, or V_0 ≤ 0 and V_T > 0.

Generally speaking, the absence of arbitrage opportunity hypothesis is built into the financial modelling process. In fact, if it were possible to construct such portfolios, there would be considerable interest in putting together a large number of them. However, the numerous market operations (purchases/sales) that this process would require would lead, through the effect of supply and demand, to alterations in the prices of the various portfolio components, until the profits obtained through the arbitrage position were all lost.

Under this hypothesis, it can therefore be said that for a portfolio of value V put together at moment 0, if V_T = 0, no financial movement occurs in that portfolio between 0 and T, and the interest rate does not vary during that period and is valid for any maturity date (flat, constant rate curve), then V_t = 0 for any t ∈ [0; T]. This hypothesis of absence of arbitrage can also be expressed as follows: in the context mentioned above, a portfolio which has been put together so as not to contain any random element will always present a return equal to the risk-free rate of interest.

The concept of 'valuation model'

A valuation model for a financial asset is a relation that expresses, quite generally, the price p (or the return) of the asset according to a number of explanatory variables3

2 See for example Miller and Modigliani, Dividend policy, growth and the valuation of shares, Journal of Business, 1961.
3 In these circumstances it is basically the risk of the security that is covered by the study; these explanatory variables are known as risk factors.


X_1, X_2, ..., X_n that represent the elements of the market likely to affect the price:

\[
p = f(X_1, X_2, \ldots, X_n) + \varepsilon.
\]

The residual ε corresponds to the difference between reality (the effective price p) and the valuation model (the function f). Where the price valuation model is a linear model (as for equities), the risk factors combine to give, through the central limit theorem, a distribution for the variable p that is normal (at least to a first approximation), and that is therefore defined by the two mean-variance parameters only. On the other hand, for some types of assets, such as options, the valuation model ceases to be linear. The previous reasoning is then no longer valid and neither are its conclusions.

We should state that alongside the risk factors that we will be mentioning, the explanatory elements of the market risk can also include:

• The imperfect nature of valuation models.
• The imperfect knowledge of the rules and limitations particular to the institution.
• The impossibility of anticipating changes to legal regulations.

We should also point out that alongside this market risk, the investor will be confronted with other types of risk that correspond to the occurrence of exceptional events such as wars, oil crises etc. This group of risks cannot, of course, be evaluated using techniques designed for the market risk. The techniques presented here will therefore not include these 'event-based' risks. This does not mean, however, that the careful risk manager should not set up 'catastrophe scenarios', in order to take account of the exceptional risks, alongside the methods designed to deal with the market risks.

In this section we will be covering a number of general principles relative to valuation models, and mentioning one or another specific model4 that will be analysed in further detail in this second part.

Linear models

We will look first at the simple case in which the function f of the valuation model is linear or, more specifically, the case in which the price variation Δp = p_t − p_0 is a first-degree function of the variations ΔX_1, ..., ΔX_n of the various explanatory variables and of that (Δε) of the residual:

\[
\Delta p = a_0 + a_1\,\Delta X_1 + \cdots + a_n\,\Delta X_n + \Delta\varepsilon.
\]

An example of the linear valuation model is the Sharpe simple index model used for equities (see Section 3.2.4). This model suggests that the variation5 in price of an equity is a first-degree function of the variation in a general index of the market (of course, the coefficients of this first-degree function vary from one security to another):

\[
\Delta p = \alpha + \beta\,\Delta I + \varepsilon.
\]

In practice, the coefficients α and β are evaluated using a regression technique.6

4 Brealey R. A. and Myers S. C., Principles of Corporate Finance, McGraw-Hill, 1991. Broquet C., Cobbaut R., Gillet R. and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997. Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988. Devolder P., Finance Stochastique, Éditions de l'ULB, 1993. Roger P., L'Évaluation des Actifs Financiers, De Boeck, 1996.
5 This is a relative variation in price, namely a return. The same applies to the index.
6 Appendix 3 contains the statistical base elements needed to understand this concept.
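As an illustration of the regression technique just mentioned, the following sketch estimates α and β of the single-index model by ordinary least squares. The two return series are invented for the example; with real data one would use the observed equity and index returns.

```python
# Least-squares estimation of alpha and beta in the single-index model
# R_equity = alpha + beta * R_index + eps.  The data below are invented.
index_r  = [0.01, -0.02, 0.015, 0.03, -0.01, 0.005]   # index returns
equity_r = [0.012, -0.025, 0.02, 0.04, -0.015, 0.01]  # equity returns

n = len(index_r)
mean_i = sum(index_r) / n
mean_e = sum(equity_r) / n

# beta = cov(equity, index) / var(index); alpha = mean_e - beta * mean_i
cov_ei = sum((i - mean_i) * (e - mean_e) for i, e in zip(index_r, equity_r)) / n
var_i  = sum((i - mean_i) ** 2 for i in index_r) / n

beta = cov_ei / var_i
alpha = mean_e - beta * mean_i
print(alpha, beta)
```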


Nonlinear models independent of time

A more complex case is that in which the function f of the relation p = f(X_1, X_2, ..., X_n) + ε is not linear. When time is not taken into consideration, Δp is evaluated using a Taylor development, as follows:

\[
\Delta p = \sum_{k=1}^{n} f'_{X_k}(X_1,\ldots,X_n)\,\Delta X_k
+ \frac{1}{2!}\sum_{k=1}^{n}\sum_{l=1}^{n} f''_{X_k X_l}(X_1,\ldots,X_n)\,\Delta X_k\,\Delta X_l
+ \cdots + \varepsilon
\]

For as long as the variations ΔX_k in the explanatory variables are low, the terms of the second order and above can be disregarded and it is possible to write:

\[
\Delta p \approx \sum_{k=1}^{n} f'_{X_k}(X_1,\ldots,X_n)\,\Delta X_k + \varepsilon
\]

This brings us back to a linear model, which will then be processed as in the previous paragraph. For example, for bonds, when the price of the security is expressed according to the interest rate, we are looking at a nonlinear model. If one is content to approximate using only the duration parameter (see Section 4.2.2), a linear approximation will be used. If, however, one wishes to introduce the concept of convexity (see Section 4.2.3), the Taylor development used will take account of the second-degree term.

Nonlinear models dependent on time

For some types of asset, duration is of fundamental importance and time is one of the arguments of the function f. This is the case, for example, with conditional assets; here, the life span of the contract is an essential element. In this case, there is a need to construct specific models that take account of this additional ingredient. We no longer have a stationary random model, such as Sharpe's example, but a model that combines the random and temporal elements; this is known as a stochastic process. An example of this type of model is the Black–Scholes model for equity options (see Section 5.3.2), where the price p is a function of various variables (price of the underlying asset, exercise price, maturity, volatility of the underlying asset, risk-free interest rate). In this model, the price of the underlying asset is itself modelled by a stochastic process (geometric Brownian motion).
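To illustrate the Taylor approximation in the bond case, the following sketch compares the exact price change after a rate shock with the first-order (duration-type) and second-order (convexity-type) approximations. The bond itself is assumed for the example: a 5-year, 5 % annual coupon bond with face value 100, initially priced at a 4 % flat rate.

```python
# Taylor approximation of a bond price change (assumed illustrative bond).
cfs = [5, 5, 5, 5, 105]              # cash flows at t = 1..5

def price(y):
    """Present value of the cash flows at the flat annual rate y."""
    return sum(cf / (1 + y) ** t for t, cf in enumerate(cfs, start=1))

y0, dy = 0.04, 0.01                  # initial rate and rate shock

p0 = price(y0)
# First and second derivatives of the price with respect to the rate
dp_dy   = sum(-t * cf / (1 + y0) ** (t + 1) for t, cf in enumerate(cfs, 1))
d2p_dy2 = sum(t * (t + 1) * cf / (1 + y0) ** (t + 2) for t, cf in enumerate(cfs, 1))

exact  = price(y0 + dy) - p0
order1 = dp_dy * dy                       # duration-only approximation
order2 = order1 + 0.5 * d2p_dy2 * dy**2   # adding the convexity term
print(exact, order1, order2)
```

The second-order approximation lies much closer to the exact repricing than the first-order one, which is precisely the point of keeping the second-degree term of the Taylor development.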

3 Equities

3.1 THE BASICS

An equity is a financial asset that corresponds to part of the ownership of a company, its value being indicative of the health of the company in question. It may be the subject of a sale and purchase, either by private agreement or on an organised market. The law of supply and demand on this market determines the price of the equity. The equity can also give rise to the periodic payment of dividends.

3.1.1 Return and risk

3.1.1.1 Return on an equity

Let us consider an equity over a period of time [t − 1; t], the duration of which may be one day, one week, one month or one year. The value of this equity at the end of the period, and the dividend paid during the said period, are random variables1 referred to respectively as C_t and D_t. The return on the equity during the period in question is defined as:

\[
R_t = \frac{C_t - C_{t-1} + D_t}{C_{t-1}}
\]

We are therefore looking at a value without dimension, which can easily be broken down into the sum of two terms:

\[
R_t = \frac{C_t - C_{t-1}}{C_{t-1}} + \frac{D_t}{C_{t-1}}
\]

• The first of these is the increase in value, which is fictitious in that the holder of the equity does not profit from it unless the equity is sold at the moment t.
• The second is the rate of return, which is real as it represents an income.

If one wishes to take account of the rate of inflation when defining the return parameter, the nominal return R_t^{(n)} (excluding inflation), the real return R_t^{(r)} (with inflation) and the rate of inflation τ are all introduced. They are linked by the relation:

\[
1 + R_t^{(n)} = \left(1 + R_t^{(r)}\right)(1 + \tau).
\]

The real return can then be easily calculated:

\[
R_t^{(r)} = \frac{1 + R_t^{(n)}}{1 + \tau} - 1
\]

1 Appendix 2 contains the basic elements of probability theory needed to understand these concepts.


Example

An equity is quoted at 1000 at the end of May and 1050 at the end of June; it paid a dividend of 80 on 12 June. Its (monthly) return for this period is therefore:

\[
R_{\text{June}} = \frac{1050 - 1000 + 80}{1000} = 0.13 = 13\,\%
\]

This consists of an increase in value of 5 % and a rate of return of 8 %. We are looking here at the nominal return. If the annual rate of inflation for that year is 5 %, the real return will be:

\[
R_{\text{June}}^{(r)} = \frac{1.13}{(1.05)^{1/12}} - 1 = 0.1254 = 12.54\,\%
\]
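The worked example above can be checked with a few lines of Python; the figures are those of the text (prices 1000 and 1050, dividend 80, 5 % annual inflation spread over one month).

```python
# Return of the worked example, then deflated by one month of a 5 % annual
# inflation rate, as in the text.
c_prev, c_now, dividend = 1000.0, 1050.0, 80.0

nominal = (c_now - c_prev + dividend) / c_prev          # total return
capital_gain = (c_now - c_prev) / c_prev                # 'fictitious' part
yield_part = dividend / c_prev                          # rate of return part

monthly_inflation = 1.05 ** (1 / 12) - 1
real = (1 + nominal) / (1 + monthly_inflation) - 1
print(nominal, real)
```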

For certain operations carried out during the return calculation period, such as the division or merging of equities, free issues or increases in capital, the principle of the definition of return is retained, but care is taken to include comparable values only in the formula. Therefore, when an equity is split into X new equities, the return will be determined by:

\[
R_t = \frac{X \cdot C_t - C_{t-1} + D_t}{C_{t-1}}
\qquad\text{or}\qquad
R_t = \frac{X \cdot C_t - C_{t-1} + X \cdot D_t}{C_{t-1}}
\]

depending on whether the dividends are paid before or after the date of the split.

If a return is estimated on the basis of several returns relating to the same duration but for different periods (for example, an 'average' monthly return estimated on the basis of the 12 monthly returns for the year in question), then mathematical common sense dictates that the following logic should be applied:

\[
1 + R_{1\,\text{year}} = (1 + R_1)(1 + R_2)\cdots(1 + R_{12})
\]

Therefore:

\[
R_{1\,\text{month}} = \sqrt[12]{(1 + R_1)\cdots(1 + R_{12})} - 1
\]

The expression (1 + R_{1 month}) is the geometric mean of the corresponding expressions for the different months. In practice, however, one generally arrives at, and uses, the arithmetic mean:

\[
R_{1\,\text{month}} = \frac{R_1 + \cdots + R_{12}}{12}
\]

This last relation is not in fact correct, as is shown by the example of a security quoted at 1000, 1100 and 1000 at moments 0, 1 and 2, respectively. The average return on this security is obviously zero. The returns on the two subperiods are 10 % and −9.09 %, respectively, which gives the following values for the average return: 0 % for the geometric mean and 0.45 % for the arithmetic mean. Generally speaking, the arithmetic mean always overestimates the return, all the more so if the fluctuations in the partial returns are significant. We are, however, more inclined to use


the arithmetic mean, because of its simplicity2 and because this type of mean is generally used for statistical estimations;3 it would be difficult to work with variances and covariances (see below) estimated in any other way.

Note

Another calculation formula is also used when no dividend is paid: that of the logarithmic return,

\[
R_t^{*} = \ln\frac{C_t}{C_{t-1}}.
\]

This formula differs only slightly from the formula shown above, as can be seen by developing it with the Taylor formula (the second-degree and higher terms, which are almost always negligible, being disregarded):

\[
R_t^{*} = \ln\left(1 + \frac{C_t - C_{t-1}}{C_{t-1}}\right) = \ln(1 + R_t) \approx R_t
\]

The advantages of R_t^{*} compared to R_t are that:

• Only R_t^{*} can take values as small as one wishes: if C_{t−1} > 0, we have

\[
\lim_{C_t \to 0^{+}} \ln\frac{C_t}{C_{t-1}} = -\infty,
\]

which is compatible with the statistical assumptions made about returns, whereas

\[
R_t = \frac{C_t - C_{t-1}}{C_{t-1}} \geq -1.
\]

• R_t^{*} allows the variation over several consecutive periods to be calculated simply:

\[
\ln\frac{C_t}{C_{t-2}} = \ln\left(\frac{C_t}{C_{t-1}} \cdot \frac{C_{t-1}}{C_{t-2}}\right) = \ln\frac{C_t}{C_{t-1}} + \ln\frac{C_{t-1}}{C_{t-2}},
\]

which is not possible with R_t. We will, however, be using R_t in our subsequent reasoning.

Example

Let us calculate in Table 3.1 the quantities R_t and R_t^{*} for a few values of C_t. The differences observed are small and, in addition, we have:

\[
\ln\frac{11\,100}{12\,750} = 0.0039 + 0.0271 - 0.0794 - 0.0907 = -0.1391
\]

2 An argument that no longer makes sense with the advent of the computer age.
3 See, for example, the portfolio return shown below.

Table 3.1  Classic and logarithmic returns

    C_t        R_t        R_t*
    12 750       –          –
    12 800     0.0039     0.0039
    13 150     0.0273     0.0271
    12 150    −0.0760    −0.0794
    11 100    −0.0864    −0.0907
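A minimal sketch reproducing the two return definitions for the prices of Table 3.1, together with the 1000/1100/1000 counterexample from the text, can be written as follows (the additivity check for log returns is exact by construction):

```python
import math

# Classic and logarithmic returns for the prices of Table 3.1.
prices = [12750, 12800, 13150, 12150, 11100]
classic = [(c1 - c0) / c0 for c0, c1 in zip(prices, prices[1:])]
logret  = [math.log(c1 / c0) for c0, c1 in zip(prices, prices[1:])]

# Log returns add up over consecutive periods; classic returns do not.
total_log = math.log(prices[-1] / prices[0])
assert abs(total_log - sum(logret)) < 1e-12

# Counterexample 1000 -> 1100 -> 1000: the arithmetic mean overstates
# the (obviously zero) average return, the geometric mean does not.
r1, r2 = 0.10, 1000 / 1100 - 1
geometric  = math.sqrt((1 + r1) * (1 + r2)) - 1   # essentially 0
arithmetic = (r1 + r2) / 2                        # about 0.45 %
print(classic, logret, geometric, arithmetic)
```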

3.1.1.2 Return on a portfolio

Let us consider a portfolio consisting of a number N of equities, and note n_j, C_{jt} and D_{jt}, respectively, the number of equities (j) held, the price of that equity at the end of period t and the dividend paid on it during that period. The total value V_t of the portfolio at the moment t, and the total value D_t of the dividends paid during period t, are therefore given by:

\[
V_t = \sum_{j=1}^{N} n_j C_{jt}
\qquad
D_t = \sum_{j=1}^{N} n_j D_{jt}
\]

The return on the portfolio will therefore be given by:

\[
R_{P,t} = \frac{V_t - V_{t-1} + D_t}{V_{t-1}}
= \frac{\displaystyle\sum_{j=1}^{N} n_j C_{jt} - \sum_{j=1}^{N} n_j C_{j,t-1} + \sum_{j=1}^{N} n_j D_{jt}}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}
= \frac{\displaystyle\sum_{j=1}^{N} n_j \left(C_{jt} - C_{j,t-1} + D_{jt}\right)}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}
= \sum_{j=1}^{N} \frac{n_j C_{j,t-1}}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}\, R_{jt}
\]

The quantity

\[
X_j = \frac{n_j C_{j,t-1}}{\displaystyle\sum_{k=1}^{N} n_k C_{k,t-1}}
\]

represents the proportion of the equity (j) in the portfolio at the moment t − 1, expressed in terms of equity market capitalisation, and one thus arrives at \(\sum_{j=1}^{N} X_j = 1\). With this notation, the return on the portfolio takes the following form:

\[
R_{P,t} = \sum_{j=1}^{N} X_j R_{jt}
\]
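The equivalence between the direct definition of the portfolio return and the weighted sum of individual returns can be checked numerically; the holdings, prices and dividends below are invented for the illustration.

```python
# Portfolio return computed two ways for assumed holdings: directly from
# portfolio values, and as the weighted sum R_P = sum_j X_j * R_j.
n      = [100, 50]                   # numbers of each equity held
c_prev = [40.0, 80.0]                # prices at t - 1
c_now  = [42.0, 78.0]                # prices at t
divs   = [1.0, 2.0]                  # dividends paid during the period

v_prev = sum(nj * c for nj, c in zip(n, c_prev))
v_now  = sum(nj * c for nj, c in zip(n, c_now))
d_tot  = sum(nj * d for nj, d in zip(n, divs))
direct = (v_now - v_prev + d_tot) / v_prev

weights = [nj * c / v_prev for nj, c in zip(n, c_prev)]              # the X_j
returns = [(cn - cp + d) / cp for cp, cn, d in zip(c_prev, c_now, divs)]
weighted = sum(x * r for x, r in zip(weights, returns))

assert abs(sum(weights) - 1) < 1e-12   # the proportions sum to 1
print(direct, weighted)
```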

Note

The relations set out above assume, of course, that the number of each of the securities in the portfolio remains unchanged during the period in question. Even if this condition is satisfied, the proportions X_j will depend on t through the prices. If, therefore, one wishes to consider a portfolio that has identical proportions at two given different moments, the n_j must be altered in consequence. This is very difficult to achieve in practice, because of transaction costs and other factors, and we will not take account of it in what follows. Instead, we will reason as though the proportions remained unchanged.

As for an isolated security, when one considers a return estimated on the basis of several returns relating to the same duration but from different periods, one uses the arithmetic mean instead of the geometric mean, which gives:

\[
R_{P,1\,\text{month}} = \frac{1}{12}\sum_{t=1}^{12} R_{P,t}
= \frac{1}{12}\sum_{t=1}^{12}\sum_{j=1}^{N} X_j R_{jt}
= \sum_{j=1}^{N} X_j \left(\frac{1}{12}\sum_{t=1}^{12} R_{jt}\right)
\]

Therefore, according to what was stated above:4

\[
R_{P,1\,\text{month}} = \sum_{j=1}^{N} X_j R_{j,1\,\text{month}}.
\]
3.1.1.3 Market return

From a theoretical point of view, the market can be considered as a portfolio consisting of all the N securities in circulation. The market return is therefore defined as:

\[
R_{M,t} = \sum_{j=1}^{N} X_j R_{jt}
\]

where X_j represents the ratio of the global equity market capitalisation of the security (j) to that of all securities together. These figures are often difficult to process and, in practice, the concept is usually replaced by that of a stock exchange index representing the market in question:

\[
R_{I,t} = \frac{I_t - I_{t-1}}{I_{t-1}}.
\]

4 Note that this relationship could not have existed if the arithmetic mean was not used.


A statistical index is a parameter that allows a magnitude X between the basic period X(s) . t and the calculation period s to be described as: It (s) = X(t) When X is composite, as for the value of a stock exchange market, several methods of evaluation can be envisaged. It is enough to say that: • • • •

Some relate to prices and others to returns. Some use arithmetic means for prices, others use equity market capitalisation. Some take account of dividends paid, others do not. Others relate to all quoted securities, others are sectorial in nature.

The best-known stock exchange indices are the Dow Jones (USA), the S&P 500 (USA), the Nikkei (Japan) and the Eurostoxx 50 (Europe).

3.1.1.4 Expected return and ergodic estimator

As we indicated above, the return on an equity is a random variable, the distribution of which is usually not fully known. The essential element of this probability law is of course its expectation:5 the expected return $E_j = E(R_j)$. This is an ex ante mean, which as such is inaccessible. For this reason, it is estimated on the basis of available historical observations, calculated for the last $T$ periods. Such an ex post estimator, which relates to historical data, is termed ergodic. The estimator for the expected return on security $(j)$ is therefore:

$$\bar{R}_j = \frac{1}{T} \sum_{t=1}^{T} R_{jt}$$

In the same way, for a portfolio, the expected return equals:

$$E_P = E(R_P) = \sum_{j=1}^{N} X_j E_j = X^t E$$

introducing the vectors $X$ and $E$ of the proportions and expected returns on the $N$ securities:

$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{pmatrix} \qquad E = \begin{pmatrix} E_1 \\ E_2 \\ \vdots \\ E_N \end{pmatrix}$$

The associated ergodic estimator is thus given by:

$$\bar{R}_P = \frac{1}{T} \sum_{t=1}^{T} R_{Pt} = \sum_{j=1}^{N} X_j \bar{R}_j.$$

In the following theoretical developments, we will use the probability terms (expectation), although it is acknowledged that for practical calculations the statistical terms (ergodic estimator) should be used.

5 From here on, we will use the index t not for the random return variable relative to period t, but for referencing a historical observation (the realised value of the random variable).

Equities


3.1.1.5 Risk of one equity

The performance of an equity cannot be measured on the basis of its expected return alone. Account should also be taken of the magnitude of the fluctuations of this return around its mean value, as this magnitude is a measurement of the risk associated with the security in question. The magnitude of variations in a variable around its average is measured using dispersion indices. Those adopted here are the variance $\sigma_j^2$ and the standard deviation $\sigma_j$ of the return:

$$\sigma_j^2 = \text{var}(R_j) = E[(R_j - E_j)^2] = E(R_j^2) - E_j^2$$

In practice, this is evaluated using its ergodic estimator:

$$s_j^2 = \frac{1}{T} \sum_{t=1}^{T} (R_{jt} - \bar{R}_j)^2 = \frac{1}{T} \sum_{t=1}^{T} R_{jt}^2 - \bar{R}_j^2$$
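The two ergodic estimators seen so far can be computed in a few lines; a minimal sketch on a made-up return series (note the 1/T divisor used by the text, not 1/(T − 1)):

```python
def ergodic_estimators(returns):
    """Ergodic estimators: sample mean R-bar_j and variance s_j^2 (1/T divisor)."""
    T = len(returns)
    mean = sum(returns) / T
    var = sum(r * r for r in returns) / T - mean * mean
    return mean, var

# Hypothetical monthly returns over T = 4 periods
r_bar, s2 = ergodic_estimators([0.02, -0.01, 0.04, 0.03])
```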

Note

Two typical values are now known for the return on an equity: its (expected) return and its risk. With regard to the distribution of this random variable, if it is possible to accept a normal distribution, then no other parameter will be needed, as this law of probability is fully characterised by its mean and its standard deviation. The reason for the omnipresence of this distribution is the central limit theorem (CLT), which requires the variable in question to be the sum of a very large number of 'small' independent effects. This is probably the reason why it is noted empirically that returns relating to long periods (a month or a year), which aggregate a large number of transactions, are often normally distributed, while this is not necessarily the case for daily returns, for example. In these cases, we generally observe distributions with fatter tails6 than those under the normal law. We will examine this phenomenon further in Part III, as value at risk is particularly interested in these distribution tails. However, we will consider in this part that the distribution of the return is characterised by the 'expected return-risk' couple, which is sufficient for the Markowitz portfolio theory.7 In other cases (dynamic models), it will be supposed in addition that this distribution is normal.

Other dispersion indices could be used for measuring risk, such as the mean deviation $E(|R_j - E_j|)$ or the semi-variance, which is defined as the variance but takes account only of those return values that are less than the expected return. It is nevertheless the variance (and its equivalent, the standard deviation) that is almost always used, because of its probability-related and statistical properties, as will be seen in the definition of portfolio risk.

3.1.1.6 Covariance and correlation

The risk of a portfolio depends of course on the risk of the securities of which it is composed, but also on the links between the various securities, through the effect

6 This is referred to as a leptokurtic distribution.
7 Markowitz H., Portfolio Selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77–91.


of diversification. The linear dependence between the return on security $(i)$ and that on security $(j)$ is measured by the covariance:

$$\sigma_{ij} = \text{cov}(R_i, R_j) = E[(R_i - E_i)(R_j - E_j)] = E(R_i R_j) - E_i E_j$$

This is evaluated by the ergodic estimator:

$$s_{ij} = \frac{1}{T} \sum_{t=1}^{T} (R_{it} - \bar{R}_i)(R_{jt} - \bar{R}_j) = \frac{1}{T} \sum_{t=1}^{T} R_{it} R_{jt} - \bar{R}_i \bar{R}_j$$

The interpretation of the sign of the covariance is well known, but its order of magnitude is difficult to interpret. To avoid this problem, we use the correlation coefficient:

$$\rho_{ij} = \text{corr}(R_i, R_j) = \frac{\sigma_{ij}}{\sigma_i \cdot \sigma_j}$$

For this coefficient, the ergodic estimator is of course given by:

$$r_{ij} = \frac{s_{ij}}{s_i \cdot s_j}$$

Remember that this last parameter is a pure number located between −1 and 1, whose sign indicates the direction of the dependency between the two variables, and whose values close to ±1 correspond to near-perfect linear relations between them.

3.1.1.7 Portfolio risk

If one remembers that $R_{P,t} = \sum_{j=1}^{N} X_j R_{jt}$, and given the formula for the variance of a linear combination of random variables, the variance of the return on the portfolio takes the following form:

$$\sigma_P^2 = \text{var}(R_P) = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} = X^t V X$$

Here $\sigma_{ii} = \sigma_i^2$, and one has defined:

$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{pmatrix} \qquad V = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1N} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{N1} & \sigma_{N2} & \cdots & \sigma_N^2 \end{pmatrix}$$

If one wishes to show the correlation coefficients, the above formula becomes:

$$\sigma_P^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_i \sigma_j \rho_{ij}$$
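As a quick numerical check of this correlation form, a minimal sketch with made-up proportions, volatilities and correlations:

```python
def portfolio_variance(X, sigma, rho):
    """sigma_P^2 = sum_i sum_j X_i X_j sigma_i sigma_j rho_ij."""
    n = len(X)
    return sum(X[i] * X[j] * sigma[i] * sigma[j] * rho[i][j]
               for i in range(n) for j in range(n))

# Made-up two-security data: weights, volatilities, correlation matrix
var_p = portfolio_variance([0.4, 0.6], [0.20, 0.10],
                           [[1.0, 0.25], [0.25, 1.0]])
```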


Example

The risk of a portfolio consisting of two equities in respective proportions 30 % and 70 %, and such that $\sigma_1^2 = 0.03$, $\sigma_2^2 = 0.02$, $\sigma_{12} = 0.01$, is calculated either by:

$$\sigma_P^2 = 0.3^2 \cdot 0.03 + 0.7^2 \cdot 0.02 + 2 \cdot 0.3 \cdot 0.7 \cdot 0.01 = 0.0167$$

or by:

$$\sigma_P^2 = \begin{pmatrix} 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.03 & 0.01 \\ 0.01 & 0.02 \end{pmatrix} \begin{pmatrix} 0.3 \\ 0.7 \end{pmatrix} = 0.0167.$$

It is interesting to compare the portfolio risk with the individual security risks. The 'expected return-risk' approach to the portfolio therefore requires knowledge of the expected returns and individual variances as well as of all the pairwise covariances. Remember that the multi-normal distribution is characterised by these elements, but that Markowitz's portfolio theory does not require this law of probability.

3.1.1.8 Security risk within a portfolio

The portfolio risk can also be written as:

$$\sigma_P^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} = \sum_{i=1}^{N} X_i \left( \sum_{j=1}^{N} X_j \sigma_{ij} \right)$$

The total risk of security $(i)$ within the portfolio therefore depends not only on $\sigma_i^2$ but also on the covariances with the other securities in the portfolio. It can be developed as follows:

$$\sum_{j=1}^{N} X_j \sigma_{ij} = \sum_{j=1}^{N} X_j\,\text{cov}(R_i, R_j) = \text{cov}\!\left(R_i, \sum_{j=1}^{N} X_j R_j\right) = \text{cov}(R_i, R_P) = \sigma_{iP}$$

The relative importance of the total risk of security $(i)$ in the portfolio risk is therefore measured by:

$$\frac{\sum_{j=1}^{N} X_j \sigma_{ij}}{\sigma_P^2} = \frac{\sigma_{iP}}{\sigma_P^2}.$$

These relative risks are such that:

$$\sum_{i=1}^{N} X_i \frac{\sigma_{iP}}{\sigma_P^2} = 1.$$
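This identity can be verified numerically; a minimal sketch with an arbitrary (made-up) covariance matrix and set of proportions:

```python
# Hypothetical 3-security covariance matrix (symmetric) and weights
V = [[0.04, 0.01, 0.00],
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.16]]
X = [0.5, 0.3, 0.2]

# sigma_iP = sum_j X_j * sigma_ij, and sigma_P^2 = sum_i X_i * sigma_iP
sigma_iP = [sum(X[j] * V[i][j] for j in range(3)) for i in range(3)]
var_P = sum(X[i] * sigma_iP[i] for i in range(3))

# The weighted relative risks sum to exactly 1
total = sum(X[i] * sigma_iP[i] / var_P for i in range(3))
```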


Example

Using the data of the previous example, the total risks of the two securities within the portfolio are given by:

$$\sigma_{1P} = 0.3 \cdot 0.03 + 0.7 \cdot 0.01 = 0.016$$
$$\sigma_{2P} = 0.3 \cdot 0.01 + 0.7 \cdot 0.02 = 0.017$$

The corresponding relative risks are therefore 0.958 and 1.018 respectively. Note that we indeed have: 0.3 · 0.958 + 0.7 · 1.018 = 1.

The concept of the relative risk, applied to the market as a whole rather than to a particular portfolio, leads to the concept of systematic risk:

$$\beta_i = \frac{\sigma_{iM}}{\sigma_M^2}$$

It represents the relative importance of the total risk of security $(i)$ in the market risk, that is, the volatility of $R_i$ in relation to $R_M$, as the quotient in question is the slope of the regression line in which the return on security $(i)$ is explained by the return on the market (see Figure 3.1):

$$R_i = \alpha_i + \beta_i R_M$$

It can be accepted, in conclusion, that the risk of a particular security should never be envisaged in isolation from the rest of the portfolio in which it is included.

3.1.2 Market efficiency

Here follows a brief summary of the concept of market efficiency,8 which is a necessary hypothesis (or one that must at least be verified approximately) for the validity of the various models of financial analysis, and which is closely linked to the concept of the 'perfect market'.

Figure 3.1 Systematic risk

8 A fuller treatment of this subject is found in Gillet P., L'Efficience des Marchés Financiers, Economica, 1999.


3.1.2.1 General principles

It was Eugene Fama9 who explicitly introduced the concept of 'efficiency'. The definition that he gave to the concept was as follows: 'A financial market is said to be efficient if, and only if, all the available information on each financial asset quoted on the market is immediately included in the price of that asset'. He goes so far as to say that there is no overvaluation or undervaluation of securities, and also that no asset can produce a return greater than that which corresponds to its own characteristics. This hypothesis therefore guarantees equality of treatment of the various investors: no category of investor has any informational advantage. The information available on this type of market therefore allows an optimal allocation of resources.

The economic justification for this concept is that the various investors, in competition and possessing the same information, will, through their involvement and because of the law of supply and demand, make the price of a security coincide with its intrinsic value. We are of course looking at a hypothesis that divides the supporters of fundamental analysis from the supporters of technical analysis. The former accept the hypothesis and indeed make it the entire basis for their reasoning; they assume that returns on securities are unpredictable variables and propose portfolio management techniques that involve minimising the risks linked to these variables.10 The latter propose methods11 that involve predicting prices on the basis of historically observed movements.

From a more mathematical point of view, market efficiency consists of assuming that prices follow a random walk, that is, that the sequence $C_t - C_{t-1}$ ($t = 1, 2, \ldots$) consists of random variables that are independent and identically distributed. In these circumstances, such a variation cannot be predicted on the basis of the available observations.
The economic conditions that define an efficient market are:

• The economic agents involved in the market behave rationally; they use the available information coherently and aim to maximise the expected utility of their wealth.
• The information is available simultaneously to all investors, and the reaction of the investors to the information is instantaneous.
• The information is available free of charge.
• There are no transaction costs or taxes on the market.
• The market in question is completely liquid.

It is obvious that these conditions can never all be strictly satisfied in a real market. This raises the question of whether the differences are significant and whether they have the effect of invalidating the efficiency hypothesis. This question is addressed in the following paragraphs, and the analysis is carried out at three levels according to the accessibility of the information. The least that can be said is that the conclusions of the research carried out in order to test efficiency are inconclusive and should not be used as a basis for forming clear and definitive ideas.

9 Fama E. F., Behaviour of Stock Market Prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Fama E. F., Random Walks in Stock Market Prices, Financial Analysts Journal, 1965. Fama E. F., Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, Vol. 25, 1970.
10 This approach is adopted in this work.
11 Refer for example to Bechu T. and Bertrand E., L'Analyse Technique, Economica, 1998.


3.1.2.2 Weak form

The weak form of the efficiency hypothesis postulates that it is not possible to gain a particular advantage from the range of historical observations; the rates therefore purely and simply include the previous rate values. The tests applied in order to verify this hypothesis relate to the possibility of predicting rates on the basis of their history. Here are a few of the analyses carried out:

• The autocorrelation test. Is there a correlation (positive or negative) between successive return values for a security that allows forecasts to be made?
• The run test. Is the distribution of the lengths of sequences of positive returns and negative returns normal?
• Statistical tests for a random walk.
• Simulation tests for technical analysis methods. Do the speculation techniques give better results than passive management?

Generally speaking, most of these tests lead to acceptance of the weak efficiency hypothesis, even though the most demanding tests from the statistical viewpoint sometimes invalidate it.

3.1.2.3 Semi-strong form

The semi-strong form of the efficiency hypothesis postulates that it is not possible to gain a particular advantage from information made public in relation to securities; the rates therefore change instantly and correctly when an event such as an increase in capital, a division of securities, a change of dividend policy, a balance sheet publication or a take-over bid is announced publicly. The tests carried out to verify this hypothesis therefore relate to the effects of the events announced. They consist successively of:

• Determining the theoretical return on a security, $R_{it} = \alpha_i + \beta_i R_{Mt}$, on the basis of historical observations relating to a period that does not include such events.
• When such an event occurs, measuring the difference between the theoretical return and the real return.
• Measuring the reaction time needed for the rates to adjust again.
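The first two steps of such an event study can be sketched as follows; the return series are made up, and estimating $\alpha_i$ and $\beta_i$ by ordinary least squares is our choice of method:

```python
def ols_alpha_beta(ri, rm):
    """Estimate alpha_i and beta_i in R_i = alpha_i + beta_i * R_M by least squares."""
    n = len(ri)
    mi, mm = sum(ri) / n, sum(rm) / n
    sxy = sum((a - mi) * (b - mm) for a, b in zip(ri, rm))
    sxx = sum((b - mm) ** 2 for b in rm)
    beta = sxy / sxx
    return mi - beta * mm, beta

def abnormal_return(ri_event, rm_event, alpha, beta):
    """Difference between the real return and the theoretical return."""
    return ri_event - (alpha + beta * rm_event)

# Hypothetical event-free estimation window, then one event-day observation
alpha, beta = ols_alpha_beta([0.01, 0.03, -0.02, 0.02],
                             [0.00, 0.02, -0.03, 0.01])
ar = abnormal_return(0.05, 0.01, alpha, beta)
```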
3.1.2.4 Strong form

The strong form of the efficiency hypothesis postulates that it is not possible to gain a particular advantage from non-public information relating to securities; the rates therefore change instantly and correctly even when the relevant information is held only by insiders. The tests carried out to verify this hypothesis therefore relate to the exploitation of privileged information. They follow a method similar to that used for the semi-strong form, but in specific circumstances:

• In recognised cases of misdemeanour by an insider.


• In cases of intensive trading on a market without the public being informed.
• In cases of intensive trading on the part of insiders.
• In cases of portfolios managed by professionals likely to have specific information before the general public has it, as in collective investment organisations.

3.1.2.5 Observed cases of systematic inefficiency

Although the above analyses suggest that the efficiency hypothesis can be globally accepted, cases of systematic inefficiency have been discovered. Among them, the following have sometimes been observed:

• Higher than average profitability at the end of the week, the month or the year.
• Higher profitability for low equity market capitalisation businesses than for high capitalisation companies.

Alongside these anomalies, pockets of inefficiency allowing arbitrage may present themselves. Their origin may be:

• Speculative bubbles, in which the rate of a security differs significantly and for a long time from its intrinsic value before eventually coming back to it, without movements in the market economic variables to explain the difference.
• Irrational behaviour by certain investors.

These various elements, although at odds with the efficiency hypothesis, do not, however, bring it into question. In addition, the profit to investors wishing to benefit from them will frequently be lost in transaction costs.

3.1.2.6 Conclusion

We quote P. Gillet in conclusion of this analysis. Financial market efficiency appears to be all of the following: an intellectual abstraction, a myth and an objective.

The intellectual abstraction. Revealed by researchers, the theory of financial market efficiency calls into question a number of practices currently used by financial market professionals, such as technical analysis. (. . .) It suggests passive management, while technical analysis points towards active management. (. . .) In addition, it is one of the basic principles of modern financial theory. (. . .)
The myth. All the hypotheses necessary for accepting the theory of efficiency are accepted by the theory's supporters. In addition to the classic hypotheses on the circulation of information or the absence of transaction costs, which have been addressed, other underlying hypotheses have as yet been little explored, especially those linked to the behaviour of investors and to liquidity. (. . .)

An objective. The market authorities are aware that the characteristics of efficiency make the market healthy and more credible, and therefore attract investors and businesses. To make a market more efficient is to reduce the risk of a speculative bubble. (. . .) The aim of the authorities is therefore to improve the efficiency of the financial markets (. . .).

3.1.3 Equity valuation models

The principle of equivalence, the basis of financial mathematics, allows us to state that the intrinsic value $V_0$ of an equity at the moment 0 is equal to the discounted value of the future financial flows that the security will generate. Put more simply, if one assumes that the dividends (future financial flows) are paid for periods 1, 2, etc. and have the respective totals $D_1$, $D_2$, etc., and if the discount rate $k$ is used, we obtain the relation:

$$V_0 = \sum_{t=1}^{\infty} D_t (1 + k)^{-t}$$

Note 1

The direct use of this relation can be delicate. In fact:

• The values of all the future dividends are not generally known.
• The formula assumes a constant discount rate (ad infinitum).
• It does not allow account to be taken of specific operations such as divisions or regroupings of equities, free issues or increases in capital.

The formula does, however, provide a number of services, and later we will introduce a simplified formula that can be obtained from it.

Note 2

This formula, which links $V_0$ and $k$, can be used in two ways:

• If $V_0$ is known (intrinsic value on an efficient market), the value of $k$ can be deduced from it and will then represent the expected rate of return for the security in question.
• If $k$ is given, the formula provides an assessment of the security's value, which can then be compared with the real rate $C_0$, thus allowing an overvaluation or undervaluation of the security to be detected.

3.1.3.1 The Gordon–Shapiro formula

This relation12 is based on the following hypotheses:

• The growth of the firm is self-financed.
• The rate of return $r$ of the investments and the rate of distribution $d$ of the profits are constant from one period to the next.

12 See Gordon M. and Shapiro E., Capital equipment analysis: the required rate of profit, Management Science, Vol. 3, October 1956.


Under these hypotheses, if $B_t$ denotes the profit per equity realised during period $t$ and $E_t$ the accounting value per equity at moment $t$ (capital divided by number of equities), we have:

$$D_t = d \cdot B_t \qquad B_t = r \cdot E_{t-1}$$

And therefore:

$$B_{t+1} = B_t + r \cdot (B_t - D_t) = B_t [1 + r(1 - d)]$$

The profits therefore increase at a constant rate $g = r(1 - d)$, which is the rate of profitability of the investments reduced by the proportion distributed. The dividends also increase at this constant rate, so that $D_{t+1} = (1 + g) D_t$ and hence $D_t = D_1 (1 + g)^{t-1}$. The present value can therefore be worked out as follows:

$$V_0 = \sum_{t=1}^{\infty} D_1 (1 + g)^{t-1} (1 + k)^{-t} = \frac{D_1}{1 + k} \sum_{t=0}^{\infty} \left( \frac{1 + g}{1 + k} \right)^t = \frac{D_1}{(1 + k)\left(1 - \dfrac{1 + g}{1 + k}\right)}$$

This is valid provided the discount rate $k$ is greater than the rate of growth $g$, and leads to the Gordon–Shapiro formula:

$$V_0 = \frac{D_1}{k - g} = \frac{d B_1}{k - g} = \frac{d r E_0}{k - g}$$

Example

The capital of a company consists of 50 000 equities, for a total value of 10 000 000. The investment profitability rate is 15 %, the profit distribution rate 40 %, and the discount rate 12 %. The profit per equity will be:

$$B = 0.15 \cdot \frac{10\,000\,000}{50\,000} = 30$$

The dividend per equity will therefore be $D = 0.4 \times 30 = 12$. In addition, the rate of growth is given by $g = 0.15 \times (1 - 0.4) = 0.09$.


The Gordon–Shapiro formula therefore leads to:

$$V_0 = \frac{12}{0.12 - 0.09} = \frac{12}{0.03} = 400$$

The market value of this company is therefore 50 000 × 400 = 20 000 000, while its accounting value is a mere 10 000 000.

The Gordon–Shapiro formula also produces the equation $k = g + \dfrac{D_1}{V_0}$, which shows that the return $k$ can be broken down into the dividend growth rate and the rate of dividend payment per security.
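The example's valuation can be reproduced in a few lines (a minimal sketch; the function and variable names are ours):

```python
def gordon_shapiro(r, d, k, book_value_per_share):
    """Intrinsic value V0 = d*r*E0 / (k - g), with g = r*(1 - d)."""
    g = r * (1.0 - d)
    if k <= g:
        raise ValueError("requires discount rate k > growth rate g")
    return d * r * book_value_per_share / (k - g)

# Data of the example: r = 15 %, d = 40 %, k = 12 %, E0 = 10 000 000 / 50 000 = 200
v0 = gordon_shapiro(0.15, 0.40, 0.12, 200.0)  # approximately 400
```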

3.1.3.2 The price-earnings ratio

One of the most commonly used evaluation indicators is the PER. It equals the ratio of the equity rate to the expected net profit per equity:

$$PER_0 = \frac{C_0}{B_1}$$

Its interpretation is quite clear: when purchasing an equity, one pays $PER_0 \times$ ¤1 for a profit of ¤1. Its inverse (profit over price) is often considered as a measurement of the return on the security, and securities whose PER is below the market average are considered to be undervalued and therefore of interest.

This indicator can be interpreted using the Gordon–Shapiro formula, if the hypotheses relative to that formula are satisfied. In fact, by replacing the rate with the value $V_0$ given by this formula:

$$C_0 = \frac{D_1}{k - g} = \frac{d B_1}{k - r(1 - d)}$$

we arrive directly at:

$$PER_0 = \frac{d}{k - r(1 - d)}$$

This allows the following expression to be obtained for the rate of return $k$:

$$k = r(1 - d) + \frac{d}{PER_0} = r(1 - d) + \frac{1}{PER_0} - \frac{1 - d}{PER_0}$$

As $PER_0 = \dfrac{C_0}{r E_0}$, we find that:

$$k = \frac{r(1 - d)(C_0 - E_0)}{C_0} + \frac{1}{PER_0}$$

Example

If one takes the same figures as in the previous paragraph:

$$r = 15\,\% \qquad d = 40\,\% \qquad E_0 = \frac{10\,000\,000}{50\,000} = 200$$

and the price effectively observed is 360, we arrive at:

$$PER_0 = \frac{360}{30} = 12.$$

This allows the rate of return13 to be determined as follows:

$$k = \frac{0.15 \cdot (1 - 0.4) \cdot (360 - 200)}{360} + \frac{1}{12} = 0.04 + 0.0833 = 12.33\,\%$$
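The same computation, as a minimal sketch (function name is ours):

```python
def rate_of_return_from_per(r, d, price, book_value_per_share, profit_per_share):
    """k = r(1-d)(C0 - E0)/C0 + 1/PER0, with PER0 = C0/B1."""
    per0 = price / profit_per_share
    return r * (1.0 - d) * (price - book_value_per_share) / price + 1.0 / per0

# Data of the example: r = 15 %, d = 40 %, C0 = 360, E0 = 200, B1 = 30
k = rate_of_return_from_per(0.15, 0.40, 360.0, 200.0, 30.0)  # approximately 0.1233
```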

3.2 PORTFOLIO DIVERSIFICATION AND MANAGEMENT

3.2.1 Principles of diversification

Putting together an optimal equity portfolio involves answering the following two questions, given that a list of N equities is available on the market:

• Which of these equities should I choose?
• In what quantities (numbers or proportions)?

A first approach is to look for the portfolio that provides the greatest return. This approach would logically lead to holding a portfolio consisting of just one security, the one with the greatest expected return. Unfortunately, it misses out the risk aspect completely and can lead to a catastrophic scenario if the price of the chosen security falls.

The correlations between the returns on the various available securities can, on the other hand, help compensate for the fluctuations of the various portfolio components. This, in sharp contrast to the approach described above, can help reduce the portfolio risk without reducing its expected return too much. It is this phenomenon that we will analyse here and use at a later stage to put together an optimal portfolio.

3.2.1.1 The two-equity portfolio

According to what was stated above, the expected return and variance of a portfolio invested in two equities in proportions14 $X_1$ and $X_2$ are given by:

$$E_P = X_1 E_1 + X_2 E_2$$
$$\sigma_P^2 = X_1^2 \sigma_1^2 + X_2^2 \sigma_2^2 + 2 X_1 X_2 \sigma_1 \sigma_2 \rho$$

13 Of course, if the rate had been equal to the intrinsic value V0 = 400, we would arrive at k = 12 %.
14 It is implicitly supposed in this paragraph that the proportions are between 0 and 1, that is to say, there are no short sales.


In order to show clearly the effect of diversification (the impact of correlation on risk), let us first consider the case in which the two securities have the same expected return ($E_1 = E_2 = E$) and the same risk ($\sigma_1 = \sigma_2 = \sigma$). Since $X_1 + X_2 = 1$, the equations become:

$$E_P = E \qquad \sigma_P^2 = (X_1^2 + X_2^2 + 2 X_1 X_2 \rho)\,\sigma^2$$

The expected return on the portfolio is equal to that on the securities, but the risk is lower: the maximum value that it can take corresponds to $\rho = 1$, for which $\sigma_P = \sigma$, and when $\rho < 1$, $\sigma_P < \sigma$. Note that in the case of a perfect negative correlation ($\rho = -1$), the risk can be written as $\sigma_P^2 = (X_1 - X_2)^2 \sigma^2$. This cancels out if one chooses $X_1 = X_2 = 1/2$; in this case, the expected return is retained but the risk is completely eliminated.

Let us now envisage the more general case in which the expected returns and the risks take any values. An equity is characterised by a couple $(E_i, \sigma_i)$ for $i = 1$ or 2 and can therefore be represented as a point in the $(E, \sigma)$ plane; the same of course applies to the portfolio, which corresponds to the point $(E_P, \sigma_P)$. Depending on the values given to $X_1$ (and therefore to $X_2$), the representative point of the portfolio describes a curve in the $(E, \sigma)$ plane. Let us briefly study the shape of this curve with respect to the values of the correlation coefficient $\rho$.

When $\rho = 1$, the portfolio variance15 becomes $\sigma_P^2 = (X_1 \sigma_1 + X_2 \sigma_2)^2$. By eliminating $X_1$ and $X_2$ from the three equations

$$E_P = X_1 E_1 + X_2 E_2 \qquad \sigma_P = X_1 \sigma_1 + X_2 \sigma_2 \qquad X_1 + X_2 = 1$$

we arrive at the relation:

$$\sigma_P = \frac{E_P - E_2}{E_1 - E_2}\,\sigma_1 + \frac{E_1 - E_P}{E_1 - E_2}\,\sigma_2$$

This expresses σP as a function of EP , a ﬁrst-degree function, and the full range of portfolios is therefore the sector of the straight line that links the representative points for the two securities (see Figure 3.2). E

E • (2)

(1) •

• (1)

• (2) σ

Figure 3.2 Two-equity portfolio (ρ = 1 case)

15 Strictly speaking, one should say 'the portfolio return variance'.


Faced with the situation shown on the left, the investor will choose a portfolio located on the segment according to his attitude to risk: portfolio (1) gives a low expected return but presents little risk, while portfolio (2) is the precise opposite. Faced with the situation shown on the right-hand graph, there is no room for doubt that portfolio (2) is better than portfolio (1) in terms of both expected return and risk incurred.

When $\rho = -1$, the variance of the portfolio is $\sigma_P^2 = (X_1 \sigma_1 - X_2 \sigma_2)^2$, in other words $\sigma_P = |X_1 \sigma_1 - X_2 \sigma_2|$. Applying the same reasoning as above leads to the following conclusion: the portfolios that can be constructed make up two straight-line segments issuing from points (1) and (2), which meet at a point on the vertical axis ($\sigma = 0$) and have slopes that are equal except for the sign (see Figure 3.3). Of these portfolios, of course, only those located on the upper segment are of interest; those on the lower segment are less attractive from the point of view of both risk and expected return.

Figure 3.3 Two-equity portfolio (ρ = −1 case)

In the general case, $-1 < \rho < 1$, it can be shown that all the portfolios that can be put together form a curved arc linking points (1) and (2), located between the graphs of the extreme cases $\rho = \pm 1$, as shown in Figure 3.4. If one expresses $\sigma_P^2$ as a function of $E_P$, as was done in the $\rho = 1$ case, a second-degree function is obtained. The curve obtained in the $(E, \sigma)$ plane is therefore a hyperbolic branch.

Figure 3.4 Two-equity portfolio (general case)

The term efficient portfolio is applied to a portfolio that is included among those that can be put together with the two equities and cannot be improved from the double viewpoint of risk and expected return. Graphically, we are looking at the portfolios located above the contact point A16 of the vertical tangent to the portfolio curve. In fact, between A and (2), it is not possible to improve

16 This contact point corresponds to the minimum risk portfolio.


$E_P$ without increasing the risk or to decrease $\sigma_P$ without reducing the expected return. In addition, any portfolio located on the arc that links A and (1) will be less good than the portfolios located to its left.

3.2.1.2 Portfolio with more than two equities

A portfolio consisting of three equities17 can be considered as a mixture of one of the securities and a portfolio consisting of the two others. For example, a portfolio with the composition $X_1 = 0.5$, $X_2 = 0.2$ and $X_3 = 0.3$ can also be considered to consist of security (1) and a portfolio that itself consists of securities (2) and (3) in proportions 40 % and 60 % respectively. Therefore, for fixed covariances $\sigma_{12}$, $\sigma_{13}$ and $\sigma_{23}$, the full range of portfolios that can be constructed by this process corresponds to a continuous family of curves, as shown in Figure 3.5.

All the portfolios that can be put together using three or more securities therefore form an area in the $(E, \sigma)$ plane. The concept of 'efficient portfolio' is defined in the same way as for two securities. The full range of efficient portfolios is therefore the part of the boundary of this area limited by security (1) and the contact point of the vertical tangent to the area, corresponding to the minimum risk portfolio. This boundary arc is known as the efficient frontier. The last part of this Section 3.2 is given over to the various techniques used to determine the efficient frontier, under various restrictions and hypotheses.

An investor's choice of a portfolio on the efficient frontier will be made according to his attitude to risk.
If he adopts the most cautious approach, he will choose the portfolio located at the extreme left point of the efﬁcient frontier (the least risky portfolio, very diversiﬁed), while a taste for risk will move him towards the portfolios located on the right part of the efﬁcient frontier (acceptance of increased risk with hope of higher return, generally obtained in portfolios made up of a very few proﬁtable but highly volatile securities).18
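The families of portfolios discussed in this paragraph and the preceding one can be traced numerically; a minimal sketch for two equities with made-up characteristics:

```python
import math

def two_equity_curve(e1, s1, e2, s2, rho, steps=5):
    """Points (E_P, sigma_P) of the portfolios X1*(1) + X2*(2), with X1 + X2 = 1."""
    pts = []
    for i in range(steps + 1):
        x1 = i / steps
        x2 = 1.0 - x1
        ep = x1 * e1 + x2 * e2
        var = x1**2 * s1**2 + x2**2 * s2**2 + 2 * x1 * x2 * s1 * s2 * rho
        pts.append((ep, math.sqrt(var)))
    return pts

# Made-up characteristics; with rho = -1 the risk vanishes for some X1
curve = two_equity_curve(0.05, 0.10, 0.10, 0.15, -1.0)
```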

Figure 3.5 Three-equity portfolio

17 The passage from two to three securities is general: the results obtained are valid for N securities. The attached CD-ROM shows some more realistic examples of the various models in the Excel sheets contained in the 'Ch 3' directory.
18 This question is examined further in Section 3.2.6.


3.2.2 Diversification and portfolio size

We have just seen that diversification has the effect of reducing the risk of a portfolio through the presence of various securities that are not perfectly correlated. Let us now examine the limits of this diversification: up to what point, for a given correlation structure, can diversification reduce the risk?

3.2.2.1 Mathematical formulation

To simplify the analysis, let us consider a portfolio of N securities in equal proportions:

$$X_j = \frac{1}{N} \qquad j = 1, \ldots, N$$

The portfolio risk can therefore be developed as:

$$\sigma_P^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} = \frac{1}{N^2} \left( \sum_{i=1}^{N} \sigma_i^2 + \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \sigma_{ij} \right)$$

The double sum over $j \neq i$ contains $N(N-1)$ terms, and it is therefore natural to define the average variance and the average covariance as:

$$\overline{\text{var}} = \frac{1}{N} \sum_{i=1}^{N} \sigma_i^2 \qquad \overline{\text{cov}} = \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \sigma_{ij}$$

As soon as N reaches a sufficient magnitude, these two quantities will almost cease to depend on N. They will then allow the portfolio variance to be written as follows:

$$\sigma_P^2 = \frac{1}{N}\,\overline{\text{var}} + \frac{N-1}{N}\,\overline{\text{cov}}$$

3.2.2.2 Asymptotic behaviour

When N becomes very large, the first term decreases towards 0 while the second, by then quite stable, converges towards $\overline{\text{cov}}$. The portfolio risk, however well diversified, therefore never falls below this last value, which corresponds to:

$$\overline{\text{cov}} = \lim_{N \to \infty} \left( \frac{1}{N}\,\overline{\text{var}} + \frac{N-1}{N}\,\overline{\text{cov}} \right) = \lim_{N \to \infty} \sigma_P^2 = \sigma_M^2$$

In other words, it corresponds to the market risk.
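The decomposition above is easy to explore numerically; a minimal sketch with made-up average variance and covariance:

```python
def diversified_variance(avg_var, avg_cov, n):
    """sigma_P^2 = (1/N)*avg_var + ((N-1)/N)*avg_cov for an equally weighted portfolio."""
    return avg_var / n + (n - 1) / n * avg_cov

# Made-up averages: the variance term shrinks with N, the covariance floor remains
for n in (1, 5, 50, 5000):
    v = diversified_variance(0.05, 0.01, n)
```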

Figure 3.6 Diversification and portfolio size

The behaviour of the portfolio variance as a function of the number of securities is represented by the graph shown in Figure 3.6. The effects of diversification are initially very rapid (the first term loses 80 % of its value when the number of securities increases from 1 to 5) but quickly stabilise somewhere near the $\overline{\text{cov}}$ value.

3.2.3 Markowitz model and critical line algorithm

3.2.3.1 First formulation

The efficient frontier is the 'North-West' part of the curve consisting of the portfolios defined by the following principle: for each fixed value $r$ of $E_P$, the proportions $X_j$ ($j = 1, \ldots, N$) for which $\sigma_P^2$ is minimal are determined. The efficient frontier is thus defined by giving $r$ all possible values. Mathematically, the problem is therefore presented as a search for the minimum, with respect to $X_1, \ldots, X_N$, of the function:

$$\sigma_P^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij}$$

under the double restriction:

$$\sum_{j=1}^{N} X_j E_j = r \qquad \sum_{j=1}^{N} X_j = 1$$

The Lagrangian function19 for the problem can thus be written as:

$$L(X_1, \ldots, X_N; m_1, m_2) = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} + m_1 \cdot \left( \sum_{j=1}^{N} X_j E_j - r \right) + m_2 \cdot \left( \sum_{j=1}^{N} X_j - 1 \right)$$

19 Please refer to Appendix 1 for the theory of extrema.


Taking partial derivatives with respect to the variables X1, ..., XN and to the Lagrange multipliers m1 and m2 leads to the system of N + 2 equations in N + 2 unknowns:

\[ \begin{cases} L'_{X_j} = 2 \displaystyle\sum_{i=1}^{N} X_i \sigma_{ij} + m_1 E_j + m_2 = 0 & (j = 1, \ldots, N) \\ L'_{m_1} = \displaystyle\sum_{i=1}^{N} X_i E_i - r = 0 \\ L'_{m_2} = \displaystyle\sum_{i=1}^{N} X_i - 1 = 0 \end{cases} \]

This can be written in matrix form:

\[ \begin{pmatrix} 2\sigma_1^2 & 2\sigma_{12} & \cdots & 2\sigma_{1N} & E_1 & 1 \\ 2\sigma_{21} & 2\sigma_2^2 & \cdots & 2\sigma_{2N} & E_2 & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 2\sigma_{N1} & 2\sigma_{N2} & \cdots & 2\sigma_N^2 & E_N & 1 \\ E_1 & E_2 & \cdots & E_N & 0 & 0 \\ 1 & 1 & \cdots & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \\ m_1 \\ m_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ r \\ 1 \end{pmatrix} \]

By referring to the matrix of coefficients,^20 the vector of unknowns^21 and the vector of second members as M, X* and G respectively, we give the system the form MX* = G. The resolution of this system passes through the inverse matrix of M: X* = M⁻¹G.

Note 1
In reality, this vector only supplies one stationary point of the Lagrangian function; it can be shown (although we will not do so here) that it constitutes the solution to the minimisation problem that concerns us.

Note 2
This relation must be applied to the different possible values of r to find the frontier, of which only the efficient ('North-West') part will be retained. The interesting aspect of this result is that r appears only in the vector G and not in the matrix M, which therefore has to be inverted only once.^22

Example
We now determine the efficient frontier that can be constructed with three securities with the following characteristics:

E1 = 0.05   σ1 = 0.10   ρ12 = 0.3
E2 = 0.08   σ2 = 0.12   ρ13 = 0.1
E3 = 0.10   σ3 = 0.15   ρ23 = 0.4

^20 In its order-N zone in the upper left corner, this matrix contains 2V, where V is the variance–covariance matrix.
^21 The vector of unknowns does not contain the proportions only; it also involves the Lagrange multipliers (which will not be of use to us later). For this reason we will use the notation X* instead of X (which is reserved for the vector of proportions). This remark applies to all the various models developed subsequently.
^22 The attached CD-ROM contains a series of more realistic examples of the various models in an Excel file known as Ch 3.


The variance–covariance matrix is given by:

\[ V = \begin{pmatrix} 0.0100 & 0.0036 & 0.0015 \\ 0.0036 & 0.0144 & 0.0072 \\ 0.0015 & 0.0072 & 0.0225 \end{pmatrix} \]

The matrix M is therefore equal to:

\[ M = \begin{pmatrix} 0.0200 & 0.0072 & 0.0030 & 0.05 & 1 \\ 0.0072 & 0.0288 & 0.0144 & 0.08 & 1 \\ 0.0030 & 0.0144 & 0.0450 & 0.10 & 1 \\ 0.05 & 0.08 & 0.10 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \end{pmatrix} \]

This matrix need only be inverted once. By applying X* = M⁻¹G with G = (0, 0, 0, r, 1)ᵀ for different values of r, we find a range of vectors X*, the first three components of which supply the composition of the portfolios (see Table 3.2). These proportions allow σP to be calculated^23 for the various portfolios (Table 3.3). It is therefore possible, from this information, to construct the representative curve for these portfolios (Figure 3.7).

Table 3.2  Composition of portfolios

r       X1        X2        X3
0.00     2.1293   -0.3233   -0.8060
0.01     1.8956   -0.2391   -0.6565
0.02     1.6620   -0.1549   -0.5071
0.03     1.4283   -0.0707   -0.3576
0.04     1.1946    0.0135   -0.2081
0.05     0.9609    0.0977   -0.0586
0.06     0.7272    0.1820    0.0908
0.07     0.4935    0.2662    0.2403
0.08     0.2598    0.3504    0.3898
0.09     0.0262    0.4346    0.5392
0.10    -0.2075    0.5188    0.6887
0.11    -0.4412    0.6030    0.8382
0.12    -0.6749    0.6872    0.9877
0.13    -0.9086    0.7714    1.1371
0.14    -1.1423    0.8556    1.2866
0.15    -1.3759    0.9398    1.4361

^23 The expected return is of course known.
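The system MX* = G can be cross-checked numerically; the following sketch (not the CD-ROM implementation) rebuilds M from the data above and solves it for the target return r = 0.10:

```python
import numpy as np

# Variance-covariance matrix V (sigma_ij = rho_ij * sigma_i * sigma_j).
V = np.array([[0.0100, 0.0036, 0.0015],
              [0.0036, 0.0144, 0.0072],
              [0.0015, 0.0072, 0.0225]])
E = np.array([0.05, 0.08, 0.10])  # expected returns

# Matrix of coefficients M: 2V bordered by the expected returns and a unit row/column.
M = np.zeros((5, 5))
M[:3, :3] = 2 * V
M[:3, 3] = E
M[3, :3] = E
M[:3, 4] = M[4, :3] = 1.0

r = 0.10                               # target expected return
G = np.array([0.0, 0.0, 0.0, r, 1.0])  # vector of second members
X_star = np.linalg.solve(M, G)         # (X1, X2, X3, m1, m2)
X = X_star[:3]

sigma_P = float(np.sqrt(X @ V @ X))    # portfolio risk for this composition
```

The solution X is approximately (-0.2075, 0.5188, 0.6887), the r = 0.10 row of Table 3.2, and σP is approximately 0.1376, matching Table 3.3.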

Table 3.3  Calculation of σP

EP     σP
0.00   0.2348
0.01   0.2043
0.02   0.1746
0.03   0.1465
0.04   0.1207
0.05   0.0994
0.06   0.0857
0.07   0.0835
0.08   0.0937
0.09   0.1130
0.10   0.1376
0.11   0.1651
0.12   0.1943
0.13   0.2245
0.14   0.2554
0.15   0.2868

Figure 3.7 Efficient frontier

The efficient part of this frontier is therefore the 'North-West' part, the lower limit of which corresponds to the minimum-risk portfolio. For this portfolio, we have EP = 0.0667 and σP = 0.0828.

The method just presented does not require the proportions to be positive. Indeed, a look at Table 3.2 shows that negative values (and values over 1) are sometimes obtained, as the 'classic' portfolios (0 ≤ Xj ≤ 1 for every j) correspond only to expected return values between 0.06 and 0.09. A negative value for a proportion corresponds to a short sale. This type of transaction, which is very hazardous, is not always authorised, especially in the management of investment funds. Symmetrically, a proportion over 1 indicates the purchase of a security for an amount greater than the total invested. In addition, many portfolios are subject to regulatory or internal restrictions stating that certain types of security cannot represent more than a fixed percentage of the total. In such cases, the problem must be solved by putting together portfolios in which the proportions satisfy restrictions of the type Bj− ≤ Xj ≤ Bj+ for j = 1, ..., N. We will examine this problem at a later stage.

3.2.3.2 Reformulating the problem

We continue for now to examine the problem without any inequality restrictions on the proportions. We simply alter the approach slightly; it will supply the same solution but can be generalised more easily to the various models envisaged subsequently.

If, instead of representing the portfolios graphically with σP on the x-axis and EP on the y-axis (as in Figure 3.7), EP is now shown on the x-axis and σP² on the y-axis, the efficient frontier appears as in Figure 3.8. A straight line in this graph has the equation σ² = a + λE, in which a represents the intercept and λ the slope of the line. We are interested specifically in the straight lines tangent to the efficient frontier. If the slope is zero (λ = 0), the contact point of the tangent is the least risky portfolio on the efficient frontier. Conversely, the more λ increases, the further the contact point moves along the efficient frontier towards the riskier portfolios. The parameter λ may vary from 0 to +∞ and is therefore representative of the risk of the portfolio corresponding to the contact point of the tangent with slope λ.

For a fixed value of λ, the tangent to the efficient frontier with slope λ is, of all the straight lines with that slope having at least one point in common with the efficient frontier, the one located farthest to the right, that is, the one with the smallest intercept a = σ² − λE. The problem is therefore reformulated as follows: for the various values of λ between 0 and +∞, minimise with respect to the proportions X1, ..., XN the expression:

\[ \sigma_P^2 - \lambda E_P = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} - \lambda \sum_{j=1}^{N} X_j E_j \]

under the restriction \( \sum_{j=1}^{N} X_j = 1 \). Once the solution, which depends on λ, has been found, it suffices to let this parameter vary between 0 and +∞ to obtain the efficient frontier.

Figure 3.8 Reformulation of problem


The Lagrangian function for the problem can be written as:

\[ L(X_1, \ldots, X_N; m) = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} - \lambda \sum_{j=1}^{N} X_j E_j + m \cdot \left( \sum_{j=1}^{N} X_j - 1 \right) \]

A reasoning similar to that used in the first formulation allows the following matrix expression to be deduced from the partial derivatives:

\[ M X^* = \lambda E^* + F \]

Here, it has been noted that^24

\[ M = \begin{pmatrix} 2\sigma_1^2 & 2\sigma_{12} & \cdots & 2\sigma_{1N} & 1 \\ 2\sigma_{21} & 2\sigma_2^2 & \cdots & 2\sigma_{2N} & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 2\sigma_{N1} & 2\sigma_{N2} & \cdots & 2\sigma_N^2 & 1 \\ 1 & 1 & \cdots & 1 & 0 \end{pmatrix} \quad X^* = \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ m \end{pmatrix} \quad E^* = \begin{pmatrix} E_1 \\ \vdots \\ E_N \\ 0 \end{pmatrix} \quad F = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} \]

The solution to this system is therefore supplied by:

\[ X^* = \lambda (M^{-1} E^*) + (M^{-1} F) \]

As in the first formulation, the matrix M is independent of the parameter λ, which is the quantity to be varied; it therefore needs to be inverted only once.

Example
Let us take the same data as those used in the first formulation, namely:

E1 = 0.05   σ1 = 0.10   ρ12 = 0.3
E2 = 0.08   σ2 = 0.12   ρ13 = 0.1
E3 = 0.10   σ3 = 0.15   ρ23 = 0.4

The same variance–covariance matrix V as above will be used, and the matrix M can be expressed as:

\[ M = \begin{pmatrix} 0.0200 & 0.0072 & 0.0030 & 1 \\ 0.0072 & 0.0288 & 0.0144 & 1 \\ 0.0030 & 0.0144 & 0.0450 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix} \]

This matrix inverts to:

\[ M^{-1} = \begin{pmatrix} 31.16 & -24.10 & -7.06 & 0.57 \\ -24.10 & 40.86 & -16.76 & 0.24 \\ -7.06 & -16.76 & 23.82 & 0.19 \\ 0.57 & 0.24 & 0.19 & -0.01 \end{pmatrix} \]

^24 In the same way as for X*, we use the notation E* here because E is reserved for the N-dimensional vector of the expected returns.

Table 3.4  Solutions for different values of λ

λ      X1        X2       X3       EP       σP
2.0    -1.5810   1.0137   1.5672   0.1588   0.3146
1.9    -1.4734   0.9750   1.4984   0.1542   0.3000
1.8    -1.3657   0.9362   1.4296   0.1496   0.2854
1.7    -1.2581   0.8974   1.3607   0.1450   0.2709
1.6    -1.1505   0.8586   1.2919   0.1404   0.2565
1.5    -1.0429   0.8198   1.2231   0.1357   0.2422
1.4    -0.9353   0.7810   1.1542   0.1311   0.2280
1.3    -0.8276   0.7423   1.0854   0.1265   0.2139
1.2    -0.7200   0.7035   1.0165   0.1219   0.2000
1.1    -0.6124   0.6647   0.9477   0.1173   0.1863
1.0    -0.5048   0.6259   0.8789   0.1127   0.1729
0.9    -0.3972   0.5871   0.8100   0.1081   0.1597
0.8    -0.2895   0.5484   0.7412   0.1035   0.1470
0.7    -0.1819   0.5096   0.6723   0.0989   0.1347
0.6    -0.0743   0.4708   0.6035   0.0943   0.1231
0.5     0.0333   0.4320   0.5347   0.0897   0.1123
0.4     0.1409   0.3932   0.4658   0.0851   0.1027
0.3     0.2486   0.3544   0.3970   0.0805   0.0945
0.2     0.3562   0.3157   0.3282   0.0759   0.0882
0.1     0.4638   0.2769   0.2593   0.0713   0.0842
0.0     0.5714   0.2381   0.1905   0.0667   0.0828

As the vectors E* and F are given by E* = (0.05, 0.08, 0.10, 0)ᵀ and F = (0, 0, 0, 1)ᵀ, the solutions to the problem for the different values of λ are shown in Table 3.4. The efficient frontier graph then takes the form shown in Figure 3.9.

Figure 3.9 Efficient frontier for the reformulated problem

The advantage of this new formulation is twofold. On one hand, it produces only the truly efficient portfolios, instead of the whole boundary of the attainable region, from which the upper part then has to be selected. On the other hand, it readily lends itself to generalisation in the event of problems with inequality restrictions, as well as to the simple index model and to models with a risk-free security.
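A numerical sketch of this λ-parametrised solution, using the same three-security data (again, this is a cross-check, not the book's own implementation):

```python
import numpy as np

V = np.array([[0.0100, 0.0036, 0.0015],
              [0.0036, 0.0144, 0.0072],
              [0.0015, 0.0072, 0.0225]])
E = np.array([0.05, 0.08, 0.10])

# Bordered matrix of the reformulated problem: 2V with a unit row and column.
M = np.zeros((4, 4))
M[:3, :3] = 2 * V
M[:3, 3] = M[3, :3] = 1.0

M_inv = np.linalg.inv(M)        # inverted once, reused for every lambda
E_star = np.append(E, 0.0)      # (E1, E2, E3, 0)
F = np.array([0.0, 0.0, 0.0, 1.0])

def efficient_portfolio(lam):
    """X* = lambda (M^-1 E*) + (M^-1 F); first three components are the proportions."""
    return (lam * (M_inv @ E_star) + M_inv @ F)[:3]

X_min = efficient_portfolio(0.0)  # lambda = 0: minimum-risk portfolio
```

For λ = 0 this returns approximately (0.5714, 0.2381, 0.1905), the last row of Table 3.4, whose risk σP is approximately 0.0828, the minimum of the frontier.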

3.2.3.3 Constrained Markowitz model

The problem to be solved is formulated here as follows: for the different values of λ between 0 and +∞, minimise with respect to the proportions X1, ..., XN the expression:

\[ \sigma_P^2 - \lambda E_P = \sum_{i=1}^{N} \sum_{j=1}^{N} X_i X_j \sigma_{ij} - \lambda \sum_{j=1}^{N} X_j E_j \]

with the restrictions:

\[ \sum_{j=1}^{N} X_j = 1 \qquad B_j^- \le X_j \le B_j^+ \quad (j = 1, \ldots, N) \]

We will first of all introduce the concept of a security's 'status'. The security (j) is defined as 'down' (resp. 'up') if its proportion is equal to the lower (resp. upper) bound imposed on it: Xj = Bj− (resp. Xj = Bj+). For an efficient portfolio (that is, one that minimises the Lagrangian function), the partial derivative of the Lagrangian function with respect to Xj is not zero at the optimum; it is strictly positive (resp. strictly negative), as can be seen in Figure 3.10. In the system of equations produced by the partial derivatives of the Lagrangian function, the equations relating to the 'down' (resp. 'up') securities should therefore be replaced by Xj = Bj− (resp. Xj = Bj+). The other securities are defined as 'in'; they are such that Bj− < Xj < Bj+, and at the optimum the partial derivative of the Lagrangian function with respect to Xj is zero. The equations relating to these securities should not be altered.

Figure 3.10 'Up' security and 'down' security

The adaptation of the system of equations MX* = λE* + F produced by the partial derivatives of the Lagrangian function therefore consists of leaving unaltered the components that correspond to the 'in' securities and, if the security (j) is 'down' or 'up', of altering the jth line of M and the jth component of E* and F, as follows:

\[ M = \begin{pmatrix} 2\sigma_1^2 & \cdots & 2\sigma_{1j} & \cdots & 2\sigma_{1N} & 1 \\ \vdots & & \vdots & & \vdots & \vdots \\ 0 & \cdots & 1 & \cdots & 0 & 0 \\ \vdots & & \vdots & & \vdots & \vdots \\ 2\sigma_{N1} & \cdots & 2\sigma_{Nj} & \cdots & 2\sigma_N^2 & 1 \\ 1 & \cdots & 1 & \cdots & 1 & 0 \end{pmatrix} \quad E^* = \begin{pmatrix} E_1 \\ \vdots \\ 0 \\ \vdots \\ E_N \\ 0 \end{pmatrix} \quad F = \begin{pmatrix} 0 \\ \vdots \\ B_j^{\pm} \\ \vdots \\ 0 \\ 1 \end{pmatrix} \]

With this alteration, in fact, the jth equation becomes Xj = Bj±. In addition, considering the jth line of the equality M⁻¹M = I, it is evident that M⁻¹ has the same jth line as M, so that the jth component of the solution X* = λ(M⁻¹E*) + (M⁻¹F) is again Xj = Bj±. If (j) has 'in' status, this jth component can of course be written as Xj = λuj + vj, a quantity strictly between Bj− and Bj+.

The method proceeds through a series of stages; we will write M0, E0* and F0 for the matrix elements as defined in the 'unconstrained' case, the index increasing from one stage to the next. The method begins with large values of λ (+∞, ideally). As we are looking to minimise σP² − λEP, EP needs to be as high as possible, which is consistent with a large value of the risk parameter λ. The first portfolio will therefore consist of the securities that offer the highest expected returns, in proportions equal to the upper bounds Bj+, until (with the remaining securities in proportions equal to Bj−) the sum of the proportions equals 1.^25 This portfolio is known as the first corner portfolio. At least one security will therefore be 'up', one will be 'in', and the others will be 'down'. The matrix M and the vectors E* and F are altered as shown above; this gives M1, E1* and F1, and we calculate:

\[ X^* = \lambda (M_1^{-1} E_1^*) + (M_1^{-1} F_1) \]

The parameter λ is then decreased until one of the securities changes its status.^26 This first change occurs for a value of λ equal to λc(1), known as the first critical λ. To determine this critical value, and the security that will change its status, each security is examined and a potential critical λj is defined for it. A 'down' or 'up' security (j) will change its status when the equation corresponding to it becomes L'Xj = 0, that is:

\[ 2 \sum_{k=1}^{N} X_k \sigma_{jk} - \lambda_j E_j + m = 0 \]

This is none other than the jth component of the equation M0X* = λE0* + F0, in which the various Xk and m are given by the values obtained from X* = λ(M1⁻¹E1*) + (M1⁻¹F1).

^25 If the inequality restrictions are simply 0 ≤ Xj ≤ 1 for all j (absence of short sales), the first portfolio will consist only of the security with the highest expected return.
^26 For the restrictions 0 ≤ Xj ≤ 1 for all j, the first corner portfolio consists of a single 'up' security, all the others being 'down'. The first change of status will be a transition to 'in' of the security that was 'up' and of one of the securities that were 'down'. In this case, on one hand the matrix elements M1, E1* and F1 are obtained by making the alteration required for the 'down' securities except the one that is known to pass to 'in' status, and on the other hand there is no equation determining a potential critical λ for this security.
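The row-replacement mechanism can be sketched numerically; the helper `pin` below is a hypothetical function name, and the data are those of the worked example that follows (securities (1) and (2) pinned at their lower bound 0):

```python
import numpy as np

V = np.array([[0.0100, 0.0036, 0.0015],
              [0.0036, 0.0144, 0.0072],
              [0.0015, 0.0072, 0.0225]])
E = np.array([0.05, 0.08, 0.10])

# Unconstrained matrix elements M0, E0*, F0.
M0 = np.zeros((4, 4))
M0[:3, :3] = 2 * V
M0[:3, 3] = M0[3, :3] = 1.0
E0 = np.append(E, 0.0)
F0 = np.array([0.0, 0.0, 0.0, 1.0])

def pin(M, E_star, F, j, bound):
    """Replace the j-th equation by X_j = bound (a 'down' or 'up' security)."""
    M1, E1, F1 = M.copy(), E_star.copy(), F.copy()
    M1[j, :] = 0.0
    M1[j, j] = 1.0
    E1[j] = 0.0
    F1[j] = bound
    return M1, E1, F1

# Pin securities (1) and (2) at their lower bound 0: only (3) remains free.
M1, E1, F1 = pin(*pin(M0, E0, F0, 0, 0.0), 1, 0.0)
lam = 1.0
X_star = np.linalg.solve(M1, lam * E1 + F1)
```

Whatever the value of λ, the pinned components stay at their bounds and the budget restriction forces X3 = 1, reproducing the first corner portfolio of the example below.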


For an 'in' security (j), it is known that Xj = λuj + vj; it will change its status if it becomes a 'down' security (uj > 0, as λ decreases) or an 'up' security (uj < 0), in which case Bj± = λjuj + vj. This is none other than the jth component of the relation X* = λ(M1⁻¹E1*) + (M1⁻¹F1), in which the left member is replaced by the lower or upper bound as the case may be.

We therefore obtain N equations for N potential critical values λj. The largest of these is the first critical λ, written λc(1). The proportions of the various securities do not change between λ = +∞ and λ = λc(1); the corresponding portfolio is therefore always the first corner portfolio. The security corresponding to this critical λ then changes its status, allowing M2, E2* and F2 to be constructed and the second critical λ, λc(2), to be determined, together with all the portfolios corresponding to values of λ between λc(1) and λc(2). The portfolio corresponding to λc(2) is of course the second corner portfolio.

The process is then repeated until all the potential critical λ values are negative, in which case the last critical λ is set equal to 0. The last and least risky corner portfolio, located at the extreme left point of the efficient frontier, corresponds to this value. The corner portfolios are of course situated on the efficient frontier. Between two consecutive corner portfolios, the status of the securities does not change; only the proportions change. These proportions are calculated, between λc(k−1) and λc(k), using the relation X* = λ(Mk⁻¹Ek*) + (Mk⁻¹Fk). The various sections of curve thus constructed join continuously and with the same derivative,^27 and make up the efficient frontier.

Example
Let us take the same data as were processed before:

E1 = 0.05   σ1 = 0.10   ρ12 = 0.3
E2 = 0.08   σ2 = 0.12   ρ13 = 0.1
E3 = 0.10   σ3 = 0.15   ρ23 = 0.4

Let us impose the requirement of absence of short sales: 0 ≤ Xj ≤ 1 (j = 1, 2, 3). We have the following basic matrix elements:

\[ M_0 = \begin{pmatrix} 0.0200 & 0.0072 & 0.0030 & 1 \\ 0.0072 & 0.0288 & 0.0144 & 1 \\ 0.0030 & 0.0144 & 0.0450 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix} \quad E_0^* = \begin{pmatrix} 0.05 \\ 0.08 \\ 0.10 \\ 0 \end{pmatrix} \quad F_0 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]

The first corner portfolio consists only of security (3), the one with the highest expected return. As securities (1) and (2) are 'down', we construct:

\[ M_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0.0030 & 0.0144 & 0.0450 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix} \quad E_1^* = \begin{pmatrix} 0 \\ 0 \\ 0.10 \\ 0 \end{pmatrix} \quad F_1 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]

^27 That is, with the same tangent.


We have:

\[ M_1^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & -1 & 0 & 1 \\ 0.0420 & 0.0306 & 1 & -0.0450 \end{pmatrix} \]

and therefore:

\[ X^* = \lambda (M_1^{-1} E_1^*) + (M_1^{-1} F_1) = \lambda \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0.1 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \\ -0.045 \end{pmatrix} \]

The first two components of M0X* = λE0* + F0, with the vector X* obtained above, give:

\[ 0.0030 + (0.1\lambda_1 - 0.045) = 0.05\lambda_1 \qquad 0.0144 + (0.1\lambda_2 - 0.045) = 0.08\lambda_2 \]

This gives the two potential critical λ values: λ1 = 0.84 and λ2 = 1.53. The first critical λ is therefore λc(1) = 1.53, and security (2) becomes 'in' together with (3), while (1) remains 'down'. We can therefore construct:

\[ M_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.0072 & 0.0288 & 0.0144 & 1 \\ 0.0030 & 0.0144 & 0.0450 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix} \quad E_2^* = \begin{pmatrix} 0 \\ 0.08 \\ 0.10 \\ 0 \end{pmatrix} \quad F_2 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]

This successively gives:

\[ M_2^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -0.7733 & 22.22 & -22.22 & 0.68 \\ -0.2267 & -22.22 & 22.22 & 0.32 \\ 0.0183 & 0.68 & 0.32 & -0.0242 \end{pmatrix} \]

\[ X^* = \lambda (M_2^{-1} E_2^*) + (M_2^{-1} F_2) = \lambda \begin{pmatrix} 0 \\ -0.4444 \\ 0.4444 \\ 0.0864 \end{pmatrix} + \begin{pmatrix} 0 \\ 0.68 \\ 0.32 \\ -0.0242 \end{pmatrix} \]

The first component of M0X* = λE0* + F0, with the vector X* obtained above, gives:

\[ 0.0072 \cdot (-0.4444\lambda_1 + 0.68) + 0.0030 \cdot (0.4444\lambda_1 + 0.32) + (0.0864\lambda_1 - 0.0242) = 0.05\lambda_1 \]

This produces a potential critical λ of λ1 = 0.5312. The second and third components of the relation X* = λ(M2⁻¹E2*) + (M2⁻¹F2), in which the left member is replaced by the suitable bound, produce:

\[ -0.4444\lambda_2 + 0.68 = 1 \qquad 0.4444\lambda_3 + 0.32 = 0 \]


In consequence, λ2 = λ3 = −0.7201. The second critical λ is therefore λc(2) = 0.5312, and the three securities acquire 'in' status. The matrix elements M3, E3* and F3 are then the same as those of the base case, and the problem can be approached without restriction. We therefore have:

\[ M_3^{-1} = \begin{pmatrix} 31.16 & -24.10 & -7.06 & 0.57 \\ -24.10 & 40.86 & -16.76 & 0.24 \\ -7.06 & -16.76 & 23.82 & 0.19 \\ 0.57 & 0.24 & 0.19 & -0.01 \end{pmatrix} \]

and therefore:

\[ X^* = \lambda (M_3^{-1} E_3^*) + (M_3^{-1} F_3) = \lambda \begin{pmatrix} -1.0762 \\ 0.3878 \\ 0.6884 \\ 0.0667 \end{pmatrix} + \begin{pmatrix} 0.5714 \\ 0.2381 \\ 0.1905 \\ -0.0137 \end{pmatrix} \]

Replacing each of the first three components by the suitable bound (for example −1.0762λ1 + 0.5714 = 1 for the first), we arrive at λ1 = −0.3983, λ2 = −0.6140 and λ3 = −0.2767. All three potential critical λ values being negative, the last critical λ is therefore λc(3) = 0, and the three securities retain their 'in' status until the end of the process.^28 The various portfolios on the efficient frontier, together with the expected return and the risk, are shown in Table 3.5. Of course, between λ = 0.5312 and λ = 0, the proportions obtained here are the same as those obtained in the 'unrestricted' model, as all the securities are 'in'. The efficient frontier graph therefore takes the form shown in Figure 3.11.

Table 3.5  Solution for constrained Markowitz model

λ        X1       X2       X3       EP       σP
1.53     0        0        1        0.1000   0.1500
1.5      0        0.0133   0.9867   0.0997   0.1486
1.4      0        0.0578   0.9422   0.0988   0.1442
1.3      0        0.1022   0.8978   0.0980   0.1400
1.2      0        0.1467   0.8533   0.0971   0.1360
1.1      0        0.1911   0.8089   0.0962   0.1322
1.0      0        0.2356   0.7644   0.0953   0.1286
0.9      0        0.2800   0.7200   0.0944   0.1253
0.8      0        0.3244   0.6756   0.0935   0.1222
0.7      0        0.3689   0.6311   0.0926   0.1195
0.6      0        0.4133   0.5867   0.0917   0.1170
0.5312   0        0.4439   0.5561   0.0911   0.1155
0.5      0.0333   0.4320   0.5347   0.0897   0.1123
0.4      0.1409   0.3932   0.4658   0.0851   0.1027
0.3      0.2486   0.3544   0.3970   0.0805   0.0945
0.2      0.3562   0.3157   0.3282   0.0759   0.0882
0.1      0.4638   0.2769   0.2593   0.0713   0.0842
0.0      0.5714   0.2381   0.1905   0.0667   0.0828

^28 It is quite logical to find significant diversification in the least risky efficient portfolio.
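As a cross-check of these results (using a general-purpose quadratic-programming routine rather than the critical line method itself), the program σP² − λEP can be minimised directly under the no-short-sale restrictions; a sketch for λ = 1:

```python
import numpy as np
from scipy.optimize import minimize

V = np.array([[0.0100, 0.0036, 0.0015],
              [0.0036, 0.0144, 0.0072],
              [0.0015, 0.0072, 0.0225]])
E = np.array([0.05, 0.08, 0.10])
lam = 1.0

# Minimise sigma_P^2 - lambda * E_P subject to sum(X) = 1 and 0 <= X_j <= 1.
objective = lambda X: X @ V @ X - lam * (E @ X)
result = minimize(objective,
                  x0=np.array([1 / 3, 1 / 3, 1 / 3]),
                  method="SLSQP",
                  bounds=[(0.0, 1.0)] * 3,
                  constraints=[{"type": "eq", "fun": lambda X: X.sum() - 1.0}])
X = result.x
```

The solver pins X1 at its lower bound and returns approximately (0, 0.2356, 0.7644), the λ = 1.0 row of Table 3.5.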


Figure 3.11 Efficient frontier for the constrained Markowitz model

Figure 3.12 Comparison of unconstrained and constrained efficient frontiers

Figure 3.12 superimposes the two efficient frontiers (constrained and unconstrained). The zones corresponding to short sales, and those in which all the securities are 'in', can be clearly seen.

3.2.3.4 Critical line algorithm

H. Markowitz has proposed an algorithmic method for solving the problem with the restrictions Xj ≥ 0 (j = 1, ..., N), known as the critical line algorithm. This algorithm starts with the first corner portfolio, which of course consists of the single security with the highest expected return. It then passes through the successive corner portfolios by testing, at each stage, the changes in the function to be minimised when:

• A new security is introduced into the portfolio.
• A security is taken out of the portfolio.
• A security in the portfolio is replaced by one that was not previously present.

The development of the algorithm is outside the scope of this work and is covered in the specialist literature.^29 Here, we will simply show the route taken by a three-security problem

^29 For example Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell, 1987.


such as the one illustrated in this section. The restrictions:

\[ \sum_{j=1}^{3} X_j = 1 \qquad 0 \le X_j \le 1 \quad (j = 1, 2, 3) \]

define, in three-dimensional space, a triangle with vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1), as shown in Figure 3.13.

Figure 3.13 Critical line

The critical line is represented in bold, and the points A, B and C correspond to the corner portfolios obtained for λ = λc(1), λc(2) and λc(3) respectively. In this algorithm, only the corner portfolios are determined; the portfolios located between two consecutive corner portfolios are estimated as linear combinations of the corner portfolios.

3.2.4 Sharpe's simple index model

3.2.4.1 Principles

Determining the efficient frontier within the Markowitz model is not an easy process. In addition, the amount of data required is substantial, as the whole variance–covariance matrix is needed. For this reason, W. Sharpe^30 proposed a simplified version of Markowitz's model based on the following two hypotheses.

1. The returns of the various securities are expressed as first-degree functions of the return of a market-representative index:

\[ R_{jt} = a_j + b_j R_{It} + \varepsilon_{jt} \quad (j = 1, \ldots, N) \]

It is also assumed that the residuals satisfy the classical hypotheses of linear regression,^31 among which: the residuals have zero expectation and are not correlated with the explanatory variable RIt.

2. The residuals of the regressions relative to the various securities are not correlated: cov(εit, εjt) = 0 for all i ≠ j.

With the convention of omitting the index t, the return on a portfolio is therefore written, in this case, as:

\[ R_P = \sum_{j=1}^{N} X_j R_j \]

^30 Sharpe W., A simplified model for portfolio analysis, Management Science, Vol. 9, No. 1, 1963, pp. 277–93.
^31 See Appendix 3 on this subject.


\[ \begin{aligned} &= \sum_{j=1}^{N} X_j (a_j + b_j R_I + \varepsilon_j) \\ &= \sum_{j=1}^{N} X_j a_j + \left( \sum_{j=1}^{N} X_j b_j \right) R_I + \sum_{j=1}^{N} X_j \varepsilon_j \\ &= \sum_{j=1}^{N} X_j a_j + Y R_I + \sum_{j=1}^{N} X_j \varepsilon_j \end{aligned} \]

where we have inserted \( Y = \sum_{j=1}^{N} X_j b_j \). On the basis of the hypotheses of the model, the expected return and the variance of the portfolio can be written:

\[ E_P = \sum_{j=1}^{N} X_j a_j + Y E_I \qquad \sigma_P^2 = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y^2 \sigma_I^2 \]

Note 1
The variance of the portfolio can be written in matrix terms as a quadratic form:

\[ \sigma_P^2 = \begin{pmatrix} X_1 & \cdots & X_N & Y \end{pmatrix} \begin{pmatrix} \sigma_{\varepsilon_1}^2 & & & \\ & \ddots & & \\ & & \sigma_{\varepsilon_N}^2 & \\ & & & \sigma_I^2 \end{pmatrix} \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y \end{pmatrix} \]

Because of the structure of this matrix, the simple index model is also known as the diagonal model. However, contrary to the impression the term may give, the simplification is not excessive: it is not assumed that the returns on the various securities are uncorrelated, since

\[ \sigma_{ij} = \text{cov}(a_i + b_i R_I + \varepsilon_i,\; a_j + b_j R_I + \varepsilon_j) = b_i b_j \sigma_I^2 \]

Note 2
In practice, the coefficients aj and bj of the various regressions are estimated by the least squares method, giving âj and b̂j. The residuals are estimated using the relation:

\[ \hat{\varepsilon}_{jt} = R_{jt} - (\hat{a}_j + \hat{b}_j R_{It}) \]


On the basis of these estimations, the residual variances are determined using their ergodic estimator.

3.2.4.2 Simple index model

We therefore have to solve the following problem: for the different values of λ between 0 and +∞, minimise, with respect to the proportions X1, ..., XN and the variable Y, the expression:

\[ \sigma_P^2 - \lambda E_P = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y^2 \sigma_I^2 - \lambda \cdot \left( \sum_{j=1}^{N} X_j a_j + Y E_I \right) \]

with the restrictions:

\[ \sum_{j=1}^{N} X_j b_j = Y \qquad \sum_{j=1}^{N} X_j = 1 \]

The Lagrangian function for the problem is written as:

\[ L(X_1, \ldots, X_N, Y; m_1, m_2) = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y^2 \sigma_I^2 - \lambda \cdot \left( \sum_{j=1}^{N} X_j a_j + Y E_I \right) + m_1 \cdot \left( \sum_{j=1}^{N} X_j b_j - Y \right) + m_2 \cdot \left( \sum_{j=1}^{N} X_j - 1 \right) \]

Calculation of the partial derivatives of this Lagrangian function leads to the equality MX* = λE* + F, where we have:

\[ M = \begin{pmatrix} 2\sigma_{\varepsilon_1}^2 & & & 0 & b_1 & 1 \\ & \ddots & & \vdots & \vdots & \vdots \\ & & 2\sigma_{\varepsilon_N}^2 & 0 & b_N & 1 \\ 0 & \cdots & 0 & 2\sigma_I^2 & -1 & 0 \\ b_1 & \cdots & b_N & -1 & 0 & 0 \\ 1 & \cdots & 1 & 0 & 0 & 0 \end{pmatrix} \quad X^* = \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y \\ m_1 \\ m_2 \end{pmatrix} \quad E^* = \begin{pmatrix} a_1 \\ \vdots \\ a_N \\ E_I \\ 0 \\ 0 \end{pmatrix} \quad F = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]


The solution for this system is written as: X* = λ(M⁻¹E*) + (M⁻¹F).

Example
Let us take the same data as those used in the first formulation, namely:^32

E1 = 0.05   σ1 = 0.10   ρ12 = 0.3
E2 = 0.08   σ2 = 0.12   ρ13 = 0.1
E3 = 0.10   σ3 = 0.15   ρ23 = 0.4

Let us then suppose that the regression relations are given by:

R1 = 0.014 + 0.60 RI   (σε1² = 0.0060)
R2 = −0.020 + 1.08 RI   (σε2² = 0.0040)
R3 = 0.200 + 1.32 RI   (σε3² = 0.0012)

where the figures in brackets are the estimated residual variances. Let us also suppose that the expected return and the variance of the index are EI = 0.04 and σI² = 0.0045 respectively. These data allow us to write:

\[ M = \begin{pmatrix} 0.0120 & 0 & 0 & 0 & 0.60 & 1 \\ 0 & 0.0080 & 0 & 0 & 1.08 & 1 \\ 0 & 0 & 0.0024 & 0 & 1.32 & 1 \\ 0 & 0 & 0 & 0.0090 & -1 & 0 \\ 0.60 & 1.08 & 1.32 & -1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \end{pmatrix} \quad E^* = \begin{pmatrix} 0.014 \\ -0.020 \\ 0.200 \\ 0.040 \\ 0 \\ 0 \end{pmatrix} \quad F = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]

We can therefore calculate:

\[ M^{-1} E^* = \begin{pmatrix} -7.46 \\ -18.32 \\ 25.79 \\ 9.77 \\ 0.05 \\ 0.07 \end{pmatrix} \qquad M^{-1} F = \begin{pmatrix} 0.513 \\ 0.295 \\ 0.192 \\ 0.880 \\ 0.008 \\ -0.011 \end{pmatrix} \]

The portfolios for the different values of λ are shown in Table 3.6, and the efficient frontier is represented in Figure 3.14. We should point out that although the efficient frontier has the same appearance as in Markowitz's model, there is no point in comparing the proportions here, as the regression equations relied upon are arbitrary and do not arise from an actual analysis of the relation between the returns on the securities and the return on the index.

^32 These values are clearly not necessary to determine the proportions using Sharpe's model (indeed, one of the reasons for the model is to avoid the need to calculate the variance–covariance matrix). We use them here only to calculate the efficient frontier.

Table 3.6  Solution for Sharpe's simple index model

λ       X1        X2        X3       EP       σP
0.100   -0.2332   -1.5375   2.7706   0.1424   0.3829
0.095   -0.1958   -1.4458   2.6417   0.1387   0.3647
0.090   -0.1585   -1.3542   2.5127   0.1350   0.3465
0.085   -0.1212   -1.2626   2.3838   0.1313   0.3284
0.080   -0.0839   -1.1710   2.2548   0.1276   0.3104
0.075   -0.0465   -1.0793   2.1259   0.1239   0.2924
0.070   -0.0092   -0.9877   1.9969   0.1202   0.2746
0.065    0.0281   -0.8961   1.8680   0.1165   0.2568
0.060    0.0654   -0.8045   1.7390   0.1128   0.2392
0.055    0.1028   -0.7129   1.6101   0.1091   0.2218
0.050    0.1401   -0.6212   1.4812   0.1054   0.2046
0.045    0.1774   -0.5296   1.3522   0.1017   0.1877
0.040    0.2147   -0.4380   1.2233   0.0980   0.1711
0.035    0.2521   -0.3464   1.0943   0.0943   0.1551
0.030    0.2894   -0.2547   0.9654   0.0906   0.1397
0.025    0.3267   -0.1631   0.8364   0.0869   0.1252
0.020    0.3640   -0.0715   0.7075   0.0832   0.1119
0.015    0.4014    0.0201   0.5785   0.0795   0.1003
0.010    0.4387    0.1118   0.4496   0.0758   0.0912
0.005    0.4760    0.2034   0.3206   0.0721   0.0853
0.000    0.5133    0.2950   0.1917   0.0684   0.0832

Figure 3.14 Efficient frontier for Sharpe's simple index model

Note 1
The saving in data required, compared with Markowitz's model, is considerable. The latter requires the expected returns, the variances and the pairwise covariances, that is:

\[ N + N + \frac{N(N-1)}{2} = \frac{N(N+3)}{2} \]

items of information, while the simple index model needs only the regression coefficients and residual variances, together with the expected return and variance of the index, namely:

\[ 2N + N + 2 = 3N + 2 \]

For example, on a market offering a choice of 100 securities, the number of items of information required is 5150 in the first case and just 302 in the second.

Note 2
If, in addition to the restrictions envisaged above, the inequality restrictions

\[ B_j^- \le X_j \le B_j^+ \quad (j = 1, \ldots, N) \]

are imposed, the simple index model can still be used by applying the same
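These counts can be checked in two lines (a trivial sketch of the formulas in Note 1):

```python
def markowitz_data_count(n):
    """Expected returns + variances + pairwise covariances: n(n + 3) / 2 items."""
    return n + n + n * (n - 1) // 2

def sharpe_data_count(n):
    """Regression coefficients (a_j, b_j), residual variances, index mean and variance."""
    return 2 * n + n + 2
```

For n = 100 these give 5150 and 302, the figures quoted in the text.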


principles as for Markowitz's model (alteration of the matrix elements according to the 'down', 'in' and 'up' status of the various securities, calculation of the critical λ values and of the corner portfolios).

3.2.4.3 Multi-index model

One criticism that can be made of the simple index model is that the behaviour of every security is driven by a single index. A more consistent way of proceeding is probably to divide the market securities into sectors, and to express the return on each security within a sector as a first-degree function of the return on a sectorial index. The general notation for this model is heavy and complex; we will show it for two sectors, the first corresponding to the securities j = 1, ..., N1 and the second to j = N1 + 1, ..., N1 + N2 = N. The sectorial indices will be noted I1 and I2 respectively. The regression equations take the form:

\[ R_{jt} = a_j + b_j R_{I_1 t} + \varepsilon_{jt} \quad (j = 1, \ldots, N_1) \]
\[ R_{jt} = a_j + b_j R_{I_2 t} + \varepsilon_{jt} \quad (j = N_1 + 1, \ldots, N_1 + N_2 = N) \]

The return on the portfolio, its expected return and its variance are given by:

\[ R_P = \sum_{j=1}^{N} X_j R_j = \sum_{j=1}^{N} X_j a_j + Y_1 R_{I_1} + Y_2 R_{I_2} + \sum_{j=1}^{N} X_j \varepsilon_j \]

\[ E_P = \sum_{j=1}^{N} X_j a_j + Y_1 E_{I_1} + Y_2 E_{I_2} \]

\[ \sigma_P^2 = \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2 + Y_1^2 \sigma_{I_1}^2 + Y_2^2 \sigma_{I_2}^2 + 2 Y_1 Y_2 \sigma_{I_1 I_2} \]

Here, we have introduced the parameters:

\[ Y_1 = \sum_{j=1}^{N_1} X_j b_j \qquad Y_2 = \sum_{j=N_1+1}^{N} X_j b_j \]

In matrix terms, the variance is the quadratic form:

\[ \sigma_P^2 = \begin{pmatrix} X_1 & \cdots & X_N & Y_1 & Y_2 \end{pmatrix} \begin{pmatrix} \sigma_{\varepsilon_1}^2 & & & & \\ & \ddots & & & \\ & & \sigma_{\varepsilon_N}^2 & & \\ & & & \sigma_{I_1}^2 & \sigma_{I_1 I_2} \\ & & & \sigma_{I_2 I_1} & \sigma_{I_2}^2 \end{pmatrix} \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y_1 \\ Y_2 \end{pmatrix} \]

The usual reasoning leads once again to the relation MX* = λE* + F, whose solution is X* = λ(M⁻¹E*) + (M⁻¹F), with the notations:

\[ M = \begin{pmatrix} 2\sigma_{\varepsilon_1}^2 & & & & & 0 & 0 & b_1 & 0 & 1 \\ & \ddots & & & & \vdots & \vdots & \vdots & \vdots & \vdots \\ & & 2\sigma_{\varepsilon_{N_1}}^2 & & & 0 & 0 & b_{N_1} & 0 & 1 \\ & & & \ddots & & \vdots & \vdots & 0 & b_{N_1+1} & 1 \\ & & & & 2\sigma_{\varepsilon_N}^2 & 0 & 0 & 0 & b_N & 1 \\ 0 & \cdots & & \cdots & 0 & 2\sigma_{I_1}^2 & 2\sigma_{I_1 I_2} & -1 & 0 & 0 \\ 0 & \cdots & & \cdots & 0 & 2\sigma_{I_2 I_1} & 2\sigma_{I_2}^2 & 0 & -1 & 0 \\ b_1 & \cdots & b_{N_1} & 0 \cdots & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & \cdots & 0 & b_{N_1+1} \cdots & b_N & 0 & -1 & 0 & 0 & 0 \\ 1 & \cdots & & \cdots & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \]

\[ X^* = \begin{pmatrix} X_1 \\ \vdots \\ X_N \\ Y_1 \\ Y_2 \\ m_1 \\ m_2 \\ m_3 \end{pmatrix} \quad E^* = \begin{pmatrix} a_1 \\ \vdots \\ a_N \\ E_{I_1} \\ E_{I_2} \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad F = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \]

It should be noted that, compared with the simple index model, the two-index model requires only three additional items of information: the expected return and variance of the second index, and the covariance between the two indices.

3.2.5 Model with risk-free security

3.2.5.1 Modelling and resolution

Let us now examine the case in which the portfolio consists of a certain number N of equities (with returns R1, ..., RN) in proportions X1, ..., XN, and of a risk-free security with return RF in proportion XN+1, with X1 + ... + XN + XN+1 = 1. The term 'risk-free security' rests on the following hypothesis: the investor has the possibility of investing or lending (XN+1 > 0), or of borrowing (XN+1 < 0), funds at the same rate RF. Alongside the returns on the equities, which are the random variables examined in the previous paragraphs (with their expected returns Ej and their variance–covariance matrix V), the return on the risk-free security is a degenerate random variable:

\[ E_{N+1} = R_F \qquad \sigma_{N+1}^2 = 0 \qquad \sigma_{j,N+1} = 0 \quad (j = 1, \ldots, N) \]


Asset and Risk Management

Note
We will now study the effect of the presence of a risk-free security in the portfolio on the basis of Markowitz's model without inequality restrictions. The presentation can easily be adapted to cover Sharpe's model, or to take account of the inequality restrictions. The result concerning the shape of the efficiency curve (see below) is valid in all cases, and only one presentation is necessary.

The return on the portfolio is written as RP = X1R1 + ... + XNRN + XN+1RF. This allows the expected return and variance to be calculated:

$$E_P = \sum_{j=1}^{N} X_j E_j + X_{N+1} R_F \qquad \sigma_P^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_{ij}$$

We must therefore solve, for the different values of λ between 0 and +∞, the problem of minimising with respect to the proportions X1, ..., XN and XN+1 the expression σP² − λEP, under the restriction:

$$\sum_{j=1}^{N} X_j + X_{N+1} = 1$$

The Lagrangian function for this problem can be written as

$$L(X_1,\ldots,X_N,X_{N+1};m) = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_{ij} - \lambda\left(\sum_{j=1}^{N} X_j E_j + X_{N+1} R_F\right) + m\left(\sum_{j=1}^{N} X_j + X_{N+1} - 1\right)$$

Calculation of its partial derivatives leads to the system of equations MX* = λE* + F, where we have:

$$M = \begin{pmatrix}
2\sigma_1^2 & 2\sigma_{12} & \cdots & 2\sigma_{1N} & 0 & 1\\
2\sigma_{21} & 2\sigma_2^2 & \cdots & 2\sigma_{2N} & 0 & 1\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
2\sigma_{N1} & 2\sigma_{N2} & \cdots & 2\sigma_N^2 & 0 & 1\\
0 & 0 & \cdots & 0 & 0 & 1\\
1 & 1 & \cdots & 1 & 1 & 0
\end{pmatrix} \qquad
X^* = \begin{pmatrix} X_1\\ X_2\\ \vdots\\ X_N\\ X_{N+1}\\ m \end{pmatrix} \qquad
E^* = \begin{pmatrix} E_1\\ E_2\\ \vdots\\ E_N\\ R_F\\ 0 \end{pmatrix} \qquad
F = \begin{pmatrix} 0\\ 0\\ \vdots\\ 0\\ 0\\ 1 \end{pmatrix}$$


The solution of this system is of course written as X* = λ(M⁻¹E*) + (M⁻¹F).

Example
Let us take the same data as those used in the first formulation, namely:

E1 = 0.05,  σ1 = 0.10,  ρ12 = 0.3
E2 = 0.08,  σ2 = 0.12,  ρ13 = 0.1
E3 = 0.10,  σ3 = 0.15,  ρ23 = 0.4

Let us suppose that the risk-free interest rate is RF = 0.03. We therefore have:

$$M = \begin{pmatrix}
0.0200 & 0.0072 & 0.0030 & 0 & 1\\
0.0072 & 0.0288 & 0.0144 & 0 & 1\\
0.0030 & 0.0144 & 0.0450 & 0 & 1\\
0 & 0 & 0 & 0 & 1\\
1 & 1 & 1 & 1 & 0
\end{pmatrix} \qquad
E^* = \begin{pmatrix} 0.05\\ 0.08\\ 0.10\\ 0.03\\ 0 \end{pmatrix} \qquad
F = \begin{pmatrix} 0\\ 0\\ 0\\ 0\\ 1 \end{pmatrix}$$

and therefore:

$$M^{-1}E^* = \begin{pmatrix} 0.452\\ 1.024\\ 1.198\\ -2.674\\ 0.030 \end{pmatrix} \qquad
M^{-1}F = \begin{pmatrix} 0\\ 0\\ 0\\ 1\\ 0 \end{pmatrix}$$

This leads to the portfolios shown in Table 3.7.

Table 3.7 Solution for model with risk-free security

 λ     X1      X2      X3      X(RF)     EP      σP
2.0  0.9031  2.0488  2.3953  −4.3472  0.3182  0.5368
1.9  0.8580  1.9464  2.2755  −4.0799  0.3038  0.5100
1.8  0.8128  1.8439  2.1558  −3.8125  0.2894  0.4831
1.7  0.7677  1.7415  2.0360  −3.5451  0.2749  0.4563
1.6  0.7225  1.6390  1.9162  −3.2778  0.2605  0.4295
1.5  0.6774  1.5366  1.7965  −3.0104  0.2461  0.4026
1.4  0.6322  1.4342  1.6767  −2.7431  0.2317  0.3758
1.3  0.5870  1.3317  1.5569  −2.4757  0.2173  0.3489
1.2  0.5419  1.2293  1.4372  −2.2083  0.2029  0.3221
1.1  0.4967  1.1268  1.3174  −1.9410  0.1885  0.2952
1.0  0.4516  1.0244  1.1976  −1.6736  0.1741  0.2684
0.9  0.4064  0.9220  1.0779  −1.4063  0.1597  0.2416
0.8  0.3613  0.8195  0.9581  −1.1389  0.1453  0.2147
0.7  0.3161  0.7171  0.8384  −0.8715  0.1309  0.1879
0.6  0.2709  0.6146  0.7186  −0.6042  0.1165  0.1610
0.5  0.2258  0.5122  0.5988  −0.3368  0.1020  0.1342
0.4  0.1806  0.4098  0.4791  −0.0694  0.0876  0.1074
0.3  0.1355  0.3073  0.3593   0.1979  0.0732  0.0805
0.2  0.0903  0.2049  0.2395   0.4653  0.0588  0.0537
0.1  0.0452  0.1024  0.1198   0.7326  0.0444  0.0268
0.0  0.0000  0.0000  0.0000   1.0000  0.0300  0.0000
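The bordered system MX* = λE* + F of this example can be solved numerically. The sketch below (a minimal NumPy implementation, not the book's own code) reproduces the vectors M⁻¹E* and M⁻¹F and the λ = 2.0 row of Table 3.7:

```python
import numpy as np

# Data of the example: three equities plus a risk-free rate of 3%.
E = np.array([0.05, 0.08, 0.10])
sig = np.array([0.10, 0.12, 0.15])
rho = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.4],
                [0.1, 0.4, 1.0]])
RF = 0.03
V = rho * np.outer(sig, sig)        # variance-covariance matrix of the equities

# Bordered system M X* = lambda E* + F: N equity rows, one risk-free row,
# one budget-constraint row (m is the Lagrange multiplier).
n = len(E)
M = np.zeros((n + 2, n + 2))
M[:n, :n] = 2 * V                   # 2*sigma_ij block
M[:n, -1] = 1                       # multiplier column
M[n, -1] = 1                        # risk-free row: m = lambda * RF
M[-1, :n + 1] = 1                   # budget constraint X1 + ... + XN + X(RF) = 1
E_star = np.concatenate([E, [RF, 0.0]])
F = np.zeros(n + 2)
F[-1] = 1.0

a = np.linalg.solve(M, E_star)      # M^-1 E*  (about 0.452, 1.024, 1.198, -2.674, 0.030)
f = np.linalg.solve(M, F)           # M^-1 F   (0, 0, 0, 1, 0)

def portfolio(lam):
    """Proportions (X1, X2, X3, X_RF) of the optimal portfolio for a given lambda."""
    return (lam * a + f)[:n + 1]

x = portfolio(2.0)                  # first row of Table 3.7
EP = x[:n] @ E + x[n] * RF
sigP = float(np.sqrt(x[:n] @ V @ x[:n]))
print(np.round(x, 4), round(EP, 4), round(sigP, 4))
```

The printed proportions, expected return and standard deviation match the λ = 2.0 row of the table.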

Figure 3.15 Efficient frontier for model with risk-free security (expected return plotted against standard deviation)

Figure 3.16 Comparison of efficient frontiers with and without risk-free security

The efficient frontier is shown in Figure 3.15. If the efficient frontier obtained above and the frontier obtained using Markowitz's model (without the risk-free security) are superimposed, Figure 3.16 is obtained.

3.2.5.2 Efficient frontier

The graphic phenomenon that appears in the previous example is general. In fact, a portfolio consisting of N securities and the risk-free security can be considered to consist of the risk-free security in the proportion X = XN+1 and a portfolio of equities, of return R (with parameters E and σ), in the proportion 1 − X. The return on the risk-free security has zero variance and is not correlated with the return on the equity portfolio. The parameters of the portfolio are therefore given by

EP = X·RF + (1 − X)·E
σP² = (1 − X)²·σ²

which gives, after X has been eliminated:

$$E_P = R_F \pm \frac{E - R_F}{\sigma}\,\sigma_P$$

Figure 3.17 Portfolios with risk-free security

Figure 3.18 Efficient frontier with risk-free security present

according to whether X ≤ 1 or X ≥ 1. The equations of these straight lines show that the portfolios in question are located on two half-lines with slopes of equal magnitude and opposite sign (see Figure 3.17). The lower half-line (X ≥ 1) corresponds to a situation in which the equity portfolio is sold short in order to invest more in the risk-free security. From now on, we will be interested in the upper part. If the efficient frontier consisting only of equities is known, the optimal half-line, which maximises EP for a given σP, is the line located highest, that is, the tangent to the efficient frontier of the equities (see Figure 3.18). The portfolios located between the vertical axis and the contact point A are characterised by 0 ≤ X ≤ 1, and those beyond A are such that X ≤ 0 (borrowing at the rate RF to invest further in the contact portfolio A).

3.2.6 The Elton, Gruber and Padberg method of portfolio management

The Elton, Gruber and Padberg (EGP) method33 was developed34 to supply a quick and coherent solution to the problem of optimising portfolios. Instead of determining

33 Or, more precisely, methods; in fact, various models have been developed around a general idea, according to the hypotheses laid down.
34 Elton E., Gruber M. and Padberg M., Simple criteria for optimal portfolio selection, Journal of Finance, Vol. XI, No. 5, 1976, pp. 1341–57.


the efficient frontier as in Markowitz's or Sharpe's models, this new technique simply determines the portfolio corresponding to the contact point of the tangent to the efficient frontier drawn from the point (0, RF).

3.2.6.1 Hypotheses

The method now being examined assumes that:

• The mean–variance approach is relevant, which allows a certain number of results from Markowitz's theory to be used.
• There is a risk-free asset with a return denoted RF.

Alongside these general hypotheses, Elton, Gruber and Padberg have developed resolution algorithms in two specific cases:

• Constant correlations. In this first model, it is assumed that the correlation coefficients of the returns on the various securities are all equal: ρij = ρ ∀i, j.
• Sharpe's simple index model can be used.

The first of these two simplifications is quite harsh, and as such not greatly realistic, so we will concentrate on the second case. Remember that it is based on the following two conditions:

1. The returns on the various securities are expressed as first-degree functions of the return on a market-representative index: Rjt = aj + bj RIt + εjt, j = 1, ..., N. It is also assumed that the residuals verify the classic hypotheses of linear regression, including the hypothesis that the residuals have zero expectation and are not correlated with the explanatory variable RIt.
2. The residuals of the regressions relative to the various securities are not correlated: cov(εit, εjt) = 0 for all the different i and j values.

3.2.6.2 Resolution of case in which short sales are authorised

First of all, we will carry out a detailed analysis of the case in which the proportions are not subject to inequality restrictions. Here, the reasoning is more straightforward35 than in cases where short sales are prohibited. Nevertheless, as will be seen (but without demonstration), applying the algorithm is scarcely any more complex in the second case.

If one considers a portfolio P consisting solely of equities in proportions X1, X2, ..., XN, the full range of portfolios consisting partly of P and partly of the risk-free security

Elton E., Gruber M. and Padberg M., Optimal portfolios from simple ranking devices, Journal of Portfolio Management, Vol. 4, No. 3, 1978, pp. 15–19.
Elton E., Gruber M. and Padberg M., Simple criteria for optimal portfolio selection: tracing out the efficient frontier, Journal of Finance, Vol. XIII, No. 1, 1978, pp. 296–302.
Elton E., Gruber M. and Padberg M., Simple criteria for optimal portfolio selection with upper bounds, Operations Research, 1978.
Readers are also advised to read Elton E. and Gruber M., Modern Portfolio Theory and Investment Analysis, John Wiley & Sons, Inc., 1991.
35 In addition, it starts in the same way as the demonstration of the CAPM equation (see §3.3.1).

Figure 3.19 EGP method

RF will make up the straight line linking the points (0, RF) and (σP, EP), as illustrated in Figure 3.19.

The slope of the straight line in question is given by

$$\Theta_P = \frac{E_P - R_F}{\sigma_P}$$

which may be interpreted as a risk premium, as will be seen in Section 3.3.1. According to the reasoning set out in the previous paragraph, the ideal portfolio P corresponds to the contact point A of the tangent to the efficient frontier coming from the point (0, RF), for which the slope is at its maximum. We are therefore looking for the proportions that maximise the slope Θ_P or, what amounts to the same thing, that maximise Θ_P². Since

$$E_P - R_F = \sum_{j=1}^{N} X_j E_j - \sum_{j=1}^{N} X_j R_F = \sum_{j=1}^{N} X_j (E_j - R_F)$$

$$\sigma_P^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_{ij}$$

the derivative of

$$\Theta_P^2 = \frac{(E_P - R_F)^2}{\sigma_P^2} = \frac{\left(\sum_{j=1}^{N} X_j (E_j - R_F)\right)^2}{\sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j \sigma_{ij}}$$

with respect to X_k is given by:

$$(\Theta_P^2)'_{X_k} = \frac{2\left(\sum_{j=1}^{N} X_j (E_j - R_F)\right)(E_k - R_F)\,\sigma_P^2 - \left(\sum_{j=1}^{N} X_j (E_j - R_F)\right)^2 \cdot 2\sum_{j=1}^{N} X_j \sigma_{kj}}{\sigma_P^4}$$

$$= \frac{2(E_P - R_F)(E_k - R_F)\,\sigma_P^2 - 2(E_P - R_F)^2 \sum_{j=1}^{N} X_j \sigma_{kj}}{\sigma_P^4}
= \frac{2(E_P - R_F)}{\sigma_P^2}\left((E_k - R_F) - \gamma \sum_{j=1}^{N} X_j \sigma_{kj}\right)$$

where we have provisionally set γ = (E_P − R_F)/σ_P². This derivative will be zero if:

$$E_k - R_F = \gamma \sum_{j=1}^{N} X_j \sigma_{kj}$$

By introducing Z_j = γ·X_j (j = 1, ..., N), the system to be solved with respect to Z_1, ..., Z_N is therefore

$$E_k - R_F = \sum_{j=1}^{N} Z_j \sigma_{kj} \qquad k = 1,\ldots,N$$

Before proceeding with the resolution, note that finding the Z_k quantities allows the X_k quantities to be recovered, since the X_j sum to 1:

$$X_k = \frac{Z_k}{\gamma} = \frac{Z_k}{\gamma \sum_{j=1}^{N} X_j} = \frac{Z_k}{\sum_{j=1}^{N} Z_j}$$

The hypotheses of Sharpe's model allow the following to be written:

$$\sigma_{kj} = \mathrm{cov}(a_k + b_k R_I + \varepsilon_k,\; a_j + b_j R_I + \varepsilon_j) = b_k b_j \sigma_I^2 + \begin{cases} \sigma_{\varepsilon_k}^2 & \text{if } j = k \\ 0 & \text{if } j \neq k \end{cases}$$

The kth equation of the system can then be written:

$$E_k - R_F = b_k \sum_{j=1}^{N} Z_j b_j \sigma_I^2 + Z_k \sigma_{\varepsilon_k}^2$$

or, solving with respect to Z_k:

$$Z_k = \frac{1}{\sigma_{\varepsilon_k}^2}\left((E_k - R_F) - b_k \sum_{j=1}^{N} Z_j b_j \sigma_I^2\right) = \frac{b_k}{\sigma_{\varepsilon_k}^2}\left(\theta_k - \sum_{j=1}^{N} Z_j b_j \sigma_I^2\right)$$

where we have:

$$\theta_k = \frac{E_k - R_F}{b_k}$$

All that now remains is to determine the sum between the brackets. On the basis of the last result, we find:

$$\sum_{k=1}^{N} Z_k b_k = \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}\left(\theta_k - \sigma_I^2 \sum_{j=1}^{N} Z_j b_j\right) = \sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}\theta_k - \left(\sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}\right)\sigma_I^2 \sum_{j=1}^{N} Z_j b_j$$

the resolution of which gives

$$\sum_{j=1}^{N} Z_j b_j = \frac{\displaystyle\sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}\theta_k}{1 + \sigma_I^2 \displaystyle\sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}}$$

By introducing the new notation

$$\varphi = \sigma_I^2 \sum_{j=1}^{N} Z_j b_j = \frac{\displaystyle\sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}\theta_k}{1 + \sigma_I^2 \displaystyle\sum_{k=1}^{N} \frac{b_k^2}{\sigma_{\varepsilon_k}^2}} \cdot \sigma_I^2$$

and substituting the sum just calculated into the expression for Z_k, we find

$$Z_k = \frac{b_k}{\sigma_{\varepsilon_k}^2}(\theta_k - \varphi) \qquad k = 1,\ldots,N$$

Example
Let us take the same data as those used for the simple index model (only the essential data are mentioned here):

E1 = 0.05,  E2 = 0.08,  E3 = 0.10

with the regression relations and the estimated residual variances:

R1 = 0.014 + 0.60 RI  (σε1² = 0.0060)
R2 = −0.020 + 1.08 RI  (σε2² = 0.0040)
R3 = 0.200 + 1.32 RI  (σε3² = 0.0012)

Assume that the variance of the index is equal to σI² = 0.0045. Finally, assume also that, as for the model with the risk-free security, the risk-free rate is RF = 0.03. These data allow the calculation of:

θ1 = 0.0333,  θ2 = 0.0463,  θ3 = 0.0530

Therefore φ = 0.0457, and the Zk values are deduced:

Z1 = −1.2327,  Z2 = 0.1717,  Z3 = 8.1068

The proportions of the optimum portfolio follow:

X1 = −0.1750,  X2 = 0.0244,  X3 = 1.1506
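The whole computation above condenses into a few vectorised lines. The sketch below (one possible implementation, not the authors' code) reproduces θk, φ, the Zk values and the optimal proportions of the example:

```python
import numpy as np

# Data of the example: expected returns, betas on the index, residual variances
# from Sharpe's simple index model, index variance and risk-free rate.
E = np.array([0.05, 0.08, 0.10])
b = np.array([0.60, 1.08, 1.32])
var_eps = np.array([0.0060, 0.0040, 0.0012])
var_I = 0.0045
RF = 0.03

theta = (E - RF) / b                             # excess return per unit of beta
ratio = b**2 / var_eps
phi = var_I * (ratio @ theta) / (1 + var_I * ratio.sum())

Z = b / var_eps * (theta - phi)
X = Z / Z.sum()                                  # optimal proportions
print(np.round(theta, 4), round(phi, 4))
print(np.round(Z, 4), np.round(X, 4))
```

The printed values reproduce the book's figures (φ ≈ 0.0457, X ≈ (−0.1750, 0.0244, 1.1506)) up to rounding.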

3.2.6.3 Resolution of case in which short sales are prohibited

Let us now examine cases in which restrictions are introduced. These are less general than those envisaged in Markowitz's model, and are written simply as 0 ≤ Xj ≤ 1 (j = 1, ..., N). The method, which we show here without supporting calculations, is very similar to that used when short sales are authorised. As above, the following are calculated:

$$\theta_k = \frac{E_k - R_F}{b_k} \qquad k = 1,\ldots,N$$

The securities are then sorted in decreasing order of θk, and this order is preserved until the end of the algorithm. Instead of having just one parameter φ, one parameter is calculated for each security:

$$\varphi_k = \frac{\displaystyle\sum_{j=1}^{k} \frac{b_j^2}{\sigma_{\varepsilon_j}^2}\theta_j}{1 + \sigma_I^2 \displaystyle\sum_{j=1}^{k} \frac{b_j^2}{\sigma_{\varepsilon_j}^2}} \cdot \sigma_I^2 \qquad k = 1,\ldots,N$$

It can be shown that the sequence of φk numbers first increases, passes through a maximum and then ends with a decreasing phase. The value K of the index k corresponding to the maximum φk is noted. The number φK is named the 'cut-off rate', and it can be shown that calculating the Zk values from the same relation as before (replacing φ by φK) produces positive values for k = 1, ..., K and negative values for k = K + 1, ..., N. Only the first K securities are included in the portfolio. The calculations to be made are therefore:

$$Z_k = \frac{b_k}{\sigma_{\varepsilon_k}^2}(\theta_k - \varphi_K) \qquad k = 1,\ldots,K$$

which, for the proportions of the K included securities, gives:

$$X_k = \frac{Z_k}{\sum_{j=1}^{K} Z_j} \qquad k = 1,\ldots,K$$

Example
Let us take the same data as above. Of course, we still have:

θ1 = 0.0333,  θ2 = 0.0463,  θ3 = 0.0530

This allows the securities to be classified in the order (3), (2), (1). We provisionally renumber the securities in this new order, thus producing:

φ1 = 0.04599,  φ2 = 0.04604,  φ3 = 0.04566

This shows that K = 2 and the cut-off rate is φ2 = 0.04604. The Zk values are therefore deduced:

Z1 = 7.6929,  Z2 = 0.0701

and the proportions of the optimum portfolio follow:

X1 = 0.9910,  X2 = 0.0090

Reverting to the initial order, the securities to be included in the portfolio are therefore securities (2) and (3), with the relative proportions:

X2 = 0.0090,  X3 = 0.9910
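The cut-off-rate algorithm can be sketched in the same style; the ranking, the cumulative φk sequence and the selection of the first K securities reproduce the example's result (again, this is an illustrative implementation, not the authors' own):

```python
import numpy as np

# Same data as the EGP example; short sales are now prohibited.
E = np.array([0.05, 0.08, 0.10])
b = np.array([0.60, 1.08, 1.32])
var_eps = np.array([0.0060, 0.0040, 0.0012])
var_I = 0.0045
RF = 0.03

theta = (E - RF) / b
order = np.argsort(theta)[::-1]             # securities sorted by decreasing theta
th, bb, ve = theta[order], b[order], var_eps[order]

# Cumulative phi_k sequence; it peaks at the cut-off index K.
ratio = bb**2 / ve
phi = var_I * np.cumsum(ratio * th) / (1 + var_I * np.cumsum(ratio))
K = int(np.argmax(phi)) + 1
cutoff = phi[K - 1]                          # the 'cut-off rate' phi_K

Z = bb[:K] / ve[:K] * (th[:K] - cutoff)      # positive for the first K securities only
X = Z / Z.sum()
print(order[:K] + 1, round(cutoff, 5), np.round(X, 4))
```

With these data the code selects securities (3) and (2) with proportions 0.9910 and 0.0090, as in the text.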

3.2.7 Utility theory and optimal portfolio selection

Once the efficient frontier has been determined, the question that faces the investor is that of choosing, from all the efficient portfolios, the one that best suits him. The portfolio chosen will differ from one investor to another, and the choice made will depend on his attitude and behaviour towards risk. The efficient frontier, in fact, contains prudent portfolios (low expected return and risk, located at the left end of the curve) as well as more risky portfolios (higher expected return and risk, located towards the right end).

3.2.7.1 Utility function

The concept of utility function can be introduced generally36 to represent, from an individual's viewpoint, the utility and interest that he finds in a project, investment, strategy etc., the elements in question presenting a certain level of risk. The numerical values of this function are of little importance, as it is essentially used to compare projects, investments, strategies etc. Here, we will present the theory of utility in the context of its application to a return (which, remember, is random) of, for example, a portfolio of equities. Because of the presence of risk, it is evident that we cannot be content with taking E(R) as the utility of return U(R). This was clearly shown by D. Bernoulli in 1732 through the 'St Petersburg paradox'. The question is: how much would you be prepared to stake to participate in the following game? I toss a coin a number of times and I give you $2 if tails comes up on the first throw, $4 if tails comes up for the first time on the second throw, $8 if tails appears for the first time on the third throw, and so on: I will give you $2^n if tails comes up for the first time on the nth throw. Most people would lay down a small sum (at least $2), but would be reluctant to invest more because of the increased risk in the game.
A player who put down $20 would have a

36 An excellent presentation on the general concepts of behaviour in the face of risk (not necessarily financial) and the concept of 'utility' is found in Eeckhoudt L. and Gollier C., Risk, Harvester Wheatsheaf, 1995.

probability of losing equal to 1/2 + 1/4 + 1/8 + 1/16 = 15/16 = 0.9375, and would therefore come out ahead on only 6.25 stakes out of every 100. The average gain in the game, however, is

$$\sum_{n=1}^{\infty} 2^n \left(\frac{1}{2}\right)^n = 1 + 1 + 1 + \cdots = \infty$$
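The two figures quoted above are easy to verify exactly:

```python
from fractions import Fraction

# St Petersburg game: the payout is 2**n when tails first appears on throw n,
# an event of probability (1/2)**n.  A player staking $20 loses whenever the
# game stops on one of the first four throws (payouts 2, 4, 8, 16 < 20).
p_lose = sum(Fraction(1, 2**n) for n in range(1, 5))
print(p_lose)                  # 15/16, i.e. 0.9375

# Each term of the expected-payout series contributes 2**n * (1/2)**n = 1,
# so the partial sums grow without bound.
partial_sums = [sum(2**n * Fraction(1, 2**n) for n in range(1, N + 1)) for N in (10, 100, 1000)]
print([int(s) for s in partial_sums])   # 10, 100, 1000: the series diverges
```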

It is the aversion to risk that justifies the player's decision. The aim of the utility function is to represent this attitude. In utility theory, one compares projects, investments, strategies etc. (in our case, returns) through a relation of preference (R1 is preferable to R2: R1 > R2) and a relation of indifference (indifference between R1 and R2: R1 ∼ R2). The behaviour of the investor can be expressed if these two relations obey the following axioms:

• Comparability: the investor can always compare two returns: ∀R1, R2, we always have R1 > R2, R2 > R1 or R1 ∼ R2.
• Reflexivity: ∀R, R ∼ R.
• Transitivity: ∀R1, R2, R3, if R1 > R2 and R2 > R3, then R1 > R3.
• Continuity: ∀R1, R2, R3, if R1 > R2 > R3, there is a single X ∈ [0; 1] such that [X·R1 + (1 − X)·R3] ∼ R2.
• Independence: ∀R1, R2, R3 and ∀X ∈ [0; 1], if R1 > R2, then [X·R1 + (1 − X)·R3] > [X·R2 + (1 − X)·R3].

Von Neumann and Morgenstern37 have demonstrated the expected utility theorem, which states that if the preferences of an investor obey the axioms set out above, there is a function U such that ∀R1, R2: R1 > R2 ⇔ E[U(R1)] > E[U(R2)]. This utility function is clearly an increasing function. We have noted that its numerical values are not essential, as the function is only used to compare returns. The expected utility theorem allows this idea to be made more precise: if an investor's preferences are modelled by the utility function U, the function U* = aU + b with a > 0 models the same system of preferences. In fact, if R1 > R2 is expressed as E[U(R1)] > E[U(R2)], we have:

E[U*(R1)] = E[aU(R1) + b] = aE[U(R1)] + b > aE[U(R2)] + b = E[aU(R2) + b] = E[U*(R2)]

The utility function is an element intrinsically associated with each investor (and is also likely to evolve with time and circumstances). It is not easy, or indeed even very useful, to know this function.
If one wishes to estimate it approximately, one has to define a list of possible values R1 < R2 < ... < Rn for the return, and then, for i = 2, ..., n − 1, ask the investor for which probability pi he would be indifferent between obtaining

37 Von Neumann J. and Morgenstern O., Theory of Games and Economic Behaviour, Princeton University Press, 1947.


a certain return Ri and playing a lottery that gives the returns R1 and Rn with the respective probabilities (1 − pi) and pi. If one chooses arbitrarily U(R1) = 0 and U(Rn) = 100, then U(Ri) = 100·pi (i = 2, ..., n − 1).

3.2.7.2 Attitude towards risk

For most investors, an increase in return of 0.5 % is of greater interest when the current return is 2 % than when it is 5 %. This type of attitude is called risk aversion. The opposite attitude is known as taste for risk, and the middle line is termed risk neutrality. How do these behaviour patterns show up in the utility function? Let us examine the case of aversion. Generally, if one wishes to state that the utility of return U(R) must increase with R while giving less weight to the same variation in return when the level of return is high, we will have:

R1 < R2  ⇒  U(R1 + ΔR) − U(R1) > U(R2 + ΔR) − U(R2)

This expresses the decreasing nature of the marginal utility. In this case, the derivative of the utility function is a decreasing function and the second derivative is therefore negative: the utility function is concave. The results obtained from these considerations are summarised in Table 3.8, and a representation of the utility function in the various cases is shown in Figure 3.20.

Let us now define this concept more precisely. We consider an investor who has the choice between, on one hand, a certain return totalling R and, on the other, a lottery that gives him a random return taking the two values (R − r) and (R + r), each with a probability of 1/2. If he shows an aversion to risk, the utility of the certain return will exceed the expected utility of the return on the lottery:

$$U(R) > \tfrac{1}{2}\bigl[U(R - r) + U(R + r)\bigr]$$

This is shown in graphic form in Figure 3.21.

Table 3.8 Attitude towards risk

Attitude          Marginal utility    Utility function U
Risk aversion     Decreasing          Concave
Risk neutrality   Constant            Linear
Taste for risk    Increasing          Convex

Figure 3.20 Utility function (concave, linear and convex cases)
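A quick numerical illustration of the aversion inequality above, using a hypothetical concave quadratic utility (the coefficients a, b, c below are an illustrative choice with a < 0, restricted to returns below −b/2a so that U stays increasing; they are not taken from the text):

```python
# Check that a concave utility penalises the fair lottery:
# U(R) > (1/2) * [U(R - r) + U(R + r)].
a, b, c = -1.0, 0.5, 0.0          # illustrative quadratic coefficients, a < 0

def U(R):
    return a * R**2 + b * R + c

R, r = 0.10, 0.05                 # certain return and lottery spread (both < -b/2a = 0.25)
certain = U(R)
lottery = 0.5 * (U(R - r) + U(R + r))
print(certain > lottery)          # True: the certain return is preferred
```

For a quadratic utility the gap is exactly −a·r² (here 0.0025), which previews the role the curvature U″ plays in the risk premium discussed below.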

Figure 3.21 Aversion to risk

This figure shows R′, the certain return whose utility is equal to the expected utility of the lottery. The difference p = R − R′ represents the price that the investor is willing to pay to avoid having to participate in the lottery; this is known as the risk premium. Taylor expansions of U(R + r), U(R − r) and U(R′) = U(R − p) readily lead to the relation:

$$p = -\frac{U''(R)}{U'(R)} \cdot \frac{r^2}{2}$$

The first factor in this expression is the absolute risk aversion coefficient:

$$\alpha(R) = -\frac{U''(R)}{U'(R)}$$

The two most frequently used examples of utility functions corresponding to risk aversion are the exponential function and the quadratic function. If U(R) = a·e^{bR}, with a < 0 and b < 0, we will have α(R) = −b. If U(R) = aR² + bR + c, with a < 0 and b > 0, we of course have to limit ourselves to values of R that do not exceed −b/2a in order for the utility function to remain increasing. The absolute risk aversion coefficient is then given by:

$$\alpha(R) = \frac{1}{-\dfrac{b}{2a} - R}$$

When this last form can be accepted for the utility function, we have another justification for describing the distribution of returns by the two parameters of mean and variance alone, without adding a normality hypothesis (see Section 3.1.1). In this case, in fact, the expected utility of the return on a portfolio (the quantity that the investor wishes to


optimise) is given as:

E[U(RP)] = E[aRP² + bRP + c] = aE(RP²) + bE(RP) + c = a(σP² + EP²) + bEP + c

This quantity then depends on the first two moments only.

3.2.7.3 Selection of optimal portfolio

Let us now consider an investor who shows an aversion to risk and has to choose a portfolio from those on the efficient frontier. We begin by constructing the indifference curves associated with his utility function, that is, the curves corresponding to the couples (expectation, standard deviation) for which the expected utility of the return equals a given value (see Figure 3.22). These indifference curves are of convex form, the utility increasing as the curve moves upwards and to the left. By superimposing the indifference curves and the efficient frontier, it is easy to determine the portfolio P that corresponds to the maximum expected utility, as shown in Figure 3.23.

Figure 3.22 Indifference curves

Figure 3.23 Selection of optimal portfolio


3.2.7.4 Other viewpoints

Alongside the efficient portfolio based on the investor's preference system, expressed through the utility function, other objectives or restrictions can be taken into consideration. Let us examine, for example, the case of the deficit constraint. As well as optimising the couple (E, σ), that is, determining the efficient frontier, and before selecting the portfolio (through the utility function), the return on the portfolio here must not fall below a fixed threshold38 u, except with a very low probability p:

Pr[RP ≤ u] ≤ p

If the hypothesis of normality of returns is accepted, we have:

$$\Pr\left[\frac{R_P - E_P}{\sigma_P} \le \frac{u - E_P}{\sigma_P}\right] \le p$$

that is:

$$\frac{u - E_P}{\sigma_P} \le z_p$$

Here, zp is the p-quantile of the standard normal distribution (zp < 0, as p is less than 1/2). The condition can thus be written as EP ≥ u − zp·σP. The portfolios that obey the deficit constraint are located above the straight line with equation EP = u − zp·σP (see Figure 3.24). The portion of the efficient frontier delimited by this straight line of constraint is the range of portfolios from which the investor will make his selection. If p is fixed, an increase of u (a higher required return) will cause the straight line of constraint to move upwards. In the same way, if u is fixed, a reduction in p (more security with respect to the restriction) will cause the straight line of constraint to move upwards while pivoting about the point (0, u). In both cases, the section of the efficient frontier that obeys the restriction shrinks. One can also, by making use of these properties, determine the optimal portfolio on the basis of one of the two criteria, by using the straight line tangential to the efficient frontier.

Figure 3.24 Deficit constraint

38 If u = 0, this restriction means that, except in a low-probability event, the capital invested must be at least maintained.
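The constraint EP ≥ u − zp·σP is straightforward to evaluate with the standard normal quantile. In the sketch below the threshold u, the probability p and the test portfolios are illustrative choices, not values from the text:

```python
from statistics import NormalDist

# Deficit constraint under normal returns: Pr[R_P <= u] <= p
# is equivalent to E_P >= u - z_p * sigma_P, with z_p the p-quantile.
u, p = 0.0, 0.05                       # illustrative threshold and probability
z_p = NormalDist().inv_cdf(p)          # p-quantile of the standard normal (< 0)

def satisfies_constraint(EP, sigP):
    """True if a normal return of mean EP and st.dev. sigP obeys Pr[R_P <= u] <= p."""
    return EP >= u - z_p * sigP

print(round(z_p, 4))                   # about -1.6449
print(satisfies_constraint(0.10, 0.05), satisfies_constraint(0.10, 0.08))
```

With u = 0 and p = 5 %, a portfolio needs EP ≥ 1.6449·σP: the first test portfolio passes, the riskier second one fails, illustrating how the constraint cuts off the right-hand part of the frontier.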


3.2.8 The market model

Some developments in the market model follow the reasoning used in the construction of Sharpe's model, with the index replaced by the market in its totality. This model, however, belongs more to a macroeconomic line of thought than to a search for efficient portfolios.

3.2.8.1 Systematic risk and specific risk

We have already encountered the concept of the systematic risk of a security in Section 3.1.1:

$$\beta_j = \frac{\sigma_{jM}}{\sigma_M^2}$$

This measures the magnitude of the risk of the security (j) in comparison with the risk of the average security on the market. It appears as a regression coefficient when the return on this security is expressed as a linear function of the market return: Rjt = αj + βj RMt + εjt. It is, of course, assumed that the residuals verify the classical hypotheses of linear regression, establishing among other things that the residuals have zero expectation and constant variance and are not correlated with the explanatory variable RMt.

Alongside the systematic risk βj, which is the same for every period, another source of fluctuation in Rj is the residual εjt, which is specific to the period t. The term specific risk is given to the variance of the residuals: σεj² = var(εjt).

Note
In practice, the coefficients αj and βj of the regression are estimated using the least-squares method, for example β̂j = sjM/sM². The residuals are then estimated by ε̂jt = Rjt − (α̂j + β̂j RMt), and the specific risk is estimated by its ergodic estimator (1/T)Σt ε̂jt².

In the rest of this paragraph, we will omit the index t relating to time. Let us see how the risk σj² of a security splits into a systematic component and a specific component. We have:

σj² = var(Rj)
  = E[(αj + βj RM + εj − E(αj + βj RM + εj))²]
  = E[(βj(RM − EM) + εj)²]
  = βj² E[(RM − EM)²] + E(εj²) + 2βj E[(RM − EM)εj]
  = βj² var(RM) + var(εj)

Hence the announced decomposition relation:

σj² = βj²σM² + σεj²
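The least-squares estimation and the risk decomposition can be illustrated on simulated data. In the sketch below, the market returns, the coefficients α and β and the residual variance are invented for illustration; the point is that the in-sample decomposition holds exactly for the least-squares estimates:

```python
import numpy as np

# Simulate the market-model regression R_j = alpha + beta*R_M + eps
# with known (illustrative) parameters, then recover them by least squares.
rng = np.random.default_rng(0)
T = 100_000
R_M = rng.normal(0.01, 0.05, T)               # market returns
eps = rng.normal(0.0, 0.02, T)                # residuals, uncorrelated with R_M
alpha, beta = 0.002, 1.2
R_j = alpha + beta * R_M + eps

# Least-squares estimate: beta_hat = s_jM / s_M^2
beta_hat = np.cov(R_j, R_M, ddof=0)[0, 1] / np.var(R_M)
alpha_hat = R_j.mean() - beta_hat * R_M.mean()
resid = R_j - alpha_hat - beta_hat * R_M
specific = np.var(resid)                      # ergodic estimator of sigma_eps^2

total = np.var(R_j)
systematic = beta_hat**2 * np.var(R_M)        # beta_j^2 * sigma_M^2
print(round(beta_hat, 3), round(total, 6), round(systematic + specific, 6))
```

Because the least-squares residuals are uncorrelated in-sample with RM, the printed total variance equals the sum of the systematic and specific terms exactly.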


3.2.8.2 Portfolio beta

By using the regression expression for Rj, RP can be developed easily:

$$R_P = \sum_{j=1}^{N} X_j R_j = \sum_{j=1}^{N} X_j(\alpha_j + \beta_j R_M + \varepsilon_j) = \sum_{j=1}^{N} X_j \alpha_j + \left(\sum_{j=1}^{N} X_j \beta_j\right) R_M + \sum_{j=1}^{N} X_j \varepsilon_j$$

This shows that, as for the portfolio return, the portfolio beta is the average of the betas of the constituent securities, weighted by the proportions expressed in terms of equity market capitalisation:

$$\beta_P = \sum_{j=1}^{N} X_j \beta_j$$

3.2.8.3 Link between market model and portfolio diversification

As for the simple index model, it is assumed here that the regression residuals relative to the various securities are not correlated: cov(εi, εj) = 0 for i ≠ j. The portfolio risk is then written as:

$$\sigma_P^2 = \mathrm{var}\left(\sum_{j=1}^{N} X_j \alpha_j + \beta_P R_M + \sum_{j=1}^{N} X_j \varepsilon_j\right) = \beta_P^2 \sigma_M^2 + \sum_{j=1}^{N} X_j^2 \sigma_{\varepsilon_j}^2$$

If, to simplify matters, one considers a portfolio consisting of N securities in equal proportions,

Xj = 1/N,  j = 1, ..., N

the portfolio risk develops as follows:

$$\sigma_P^2 = \beta_P^2 \sigma_M^2 + \frac{1}{N^2}\sum_{j=1}^{N} \sigma_{\varepsilon_j}^2 = \beta_P^2 \sigma_M^2 + \frac{1}{N}\,\overline{\sigma_\varepsilon^2}$$

Here, the average residual variance has been introduced:

$$\overline{\sigma_\varepsilon^2} = \frac{1}{N}\sum_{j=1}^{N} \sigma_{\varepsilon_j}^2$$

The first term of the decomposition is independent of N, while the second tends towards 0 as N becomes very large. This analysis therefore shows that the portfolio risk σP² can be broken down into two terms:

• the systematic component βP²σM² (non-diversifiable risk);
• the specific component Σj Xj²σεj² (diversifiable risk).
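The shrinking of the diversifiable term can be made concrete with a small numerical sketch. The market variance, betas and residual variances below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Diversification in the market model: for an equally weighted portfolio the
# specific risk equals (average residual variance)/N, while the systematic
# term beta_P^2 * sigma_M^2 does not shrink with N.
var_M = 0.0025                        # illustrative market variance
rng = np.random.default_rng(1)

for N in (5, 50, 500):
    betas = rng.uniform(0.8, 1.2, N)
    var_eps = rng.uniform(0.001, 0.005, N)
    X = np.full(N, 1.0 / N)           # equal proportions
    beta_P = X @ betas
    systematic = beta_P**2 * var_M
    specific = (X**2 * var_eps).sum() # equals var_eps.mean() / N here
    print(N, round(systematic, 6), round(specific, 8))
```

As N grows, the printed specific component falls roughly in proportion to 1/N while the systematic component stays of the same order.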

3.3 MODEL OF FINANCIAL ASSET EQUILIBRIUM AND APPLICATIONS

3.3.1 Capital asset pricing model

Unlike the previous models, this model, developed independently by W. Sharpe39 and J. Lintner40 and known as the CAPM (MEDAF in French), is interested not in choosing a portfolio for an individual investor but in the behaviour of a whole market when the investors act rationally41 and show an aversion to risk. The aim, in this situation, is to determine the exact value of an equity.

3.3.1.1 Hypotheses

The model being examined is based on a certain number of hypotheses. The hypotheses relating to investor behaviour are:

• They put together their portfolio using Markowitz's portfolio theory, that is, relying on the mean–variance pairing.
• They all have the same expectations; that is, none of them has any privileged information, and they agree on the values of the parameters Ei, σi and σij to be used.

Hypotheses can also be laid down with regard to the transactions:

• They are made without cost.
• The purchase, sale and holding times are the same for all investors.

Finally, it is assumed that the following conditions are verified in relation to the market:

• There is no taxation, whether on increases in value, dividends or interest income.
• There are very many purchasers and sellers on the market, and they have no influence on the market other than that exerted through the law of supply and demand.

39 Sharpe W., Capital assets prices, Journal of Finance, Vol. 19, 1964, pp. 435–42.
40 Lintner J., The valuation of risky assets and the selection of risky investments, Review of Economics and Statistics, Vol. 47, 1965, pp. 13–37.
41 That is, according to the portfolio theory based on the mean–variance analysis.


• There is a risk-free interest rate RF, which is used for both borrowing and investment.
• The possibilities of borrowing and investing at this rate are not limited in terms of volume.

These hypotheses are of course not realistic. However, there are extensions of the model presented here that make some of the hypotheses more flexible. In addition, even the basic model gives good results, as do the applications that arise from it (see Sections 3.3.3, 3.3.4 and 3.3.5).

3.3.1.2 Separation theorem

This theorem states that, under the conditions specified above, all the portfolios held by the investors are, at equilibrium, combinations of the risk-free asset and the market portfolio. According to the hypotheses, all the investors have the same efficient frontier for the equities and the same risk-free rate RF. Therefore, according to the study of Markowitz's model with the risk-free security (Section 3.2.5), each investor's portfolio is located on the straight line issuing from the point (0, RF) and tangential to the efficient frontier. This portfolio consists (see Figure 3.25) of:

• the risk-free security, in proportion 1 − X;
• the portfolio A corresponding to the tangent contact point, in proportion X.

The risky portfolio A is therefore the same for all investors. The market will therefore, in accordance with the principle of supply and demand, adapt the prices so that the proportions in this portfolio are those of the whole market (A = M) and the portfolios held by the investors are perfectly diversified. The investor's choice therefore bears only on the proportion X of the market portfolio (and hence the proportion 1 − X of the risk-free security). If the portfolio chosen is located to the left of the point M (0 < X < 1), we are looking at a combination of the two investments. If it is to the right of M (X > 1), the investor borrows at the rate RF in order to acquire more than 100 % of the market portfolio.
The line in question is known as the market straight line.

Figure 3.25 Separation theorem and market straight line

Equities


Interpretation of the separation theorem is simple. The market straight line passes through the points (0, RF) and (σM, EM). Its equation is therefore given by:

EP = RF + ((EM − RF)/σM) · σP

The expected return EP on a portfolio is equal to the risk-free rate RF plus the risk premium collected by the investor when he agrees to take a risk σP. The coefficient of σP (the slope of the market straight line) is therefore the increase in expected return obtained for supporting one unit of risk: this is the unit price of risk on the market.

3.3.1.3 CAPM equation

We will now determine a relation very similar to the previous one – that is, a relation between expected return and risk – but for a security instead of a portfolio. For any portfolio of equities B, the straight line that connects the points (0, RF) and (σB, EB) has the slope

θB = (EB − RF)/σB

This slope is clearly at its maximum when B = M (see Figure 3.26) and, in the same way, the maximum value of θB² is θM². Therefore, if one terms the proportions of the various equities in the market portfolio X1, X2, ..., XN (with Σ(i=1..N) Xi = 1), we will have:

(θM²)′Xk = 0,   k = 1, ..., N

Since

EM − RF = Σ(j=1..N) Xj Ej − RF Σ(j=1..N) Xj = Σ(j=1..N) Xj (Ej − RF)

σM² = Σ(i=1..N) Σ(j=1..N) Xi Xj σij

Figure 3.26 CAPM


the derivative of

θM² = (EM − RF)²/σM² = [Σ(j=1..N) Xj (Ej − RF)]² / [Σ(i=1..N) Σ(j=1..N) Xi Xj σij]

with respect to Xk is given by:

(θM²)′Xk = { 2 [Σj Xj (Ej − RF)] (Ek − RF) · σM² − [Σj Xj (Ej − RF)]² · 2 Σj Xj σkj } / σM⁴

= { 2(EM − RF)(Ek − RF) σM² − 2(EM − RF)² σkM } / σM⁴     (since Σj Xj σkj = σkM)

= 2(EM − RF) · [(Ek − RF) σM² − (EM − RF) σkM] / σM⁴

This will be zero if

Ek − RF = (EM − RF) · σkM/σM²

or

Ek = RF + βk · (EM − RF)

This is termed the CAPM equation, and it is interpreted in a similar way to the relation in the previous paragraph. The expected return Ek on the security (k) is equal to the risk-free rate RF plus a risk premium collected by the investor who agrees to take the risk. This risk premium is the increase in expected return, and it grows with the risk that the security contributes to the market in question (βk).

Note

As we have said, the hypotheses used as a basis for the model just developed are not realistic. Empirical studies have been carried out in order to determine whether the results obtained from the application of the CAPM model are valid. One of the most detailed analyses is that carried out by Fama and MacBeth,42 which, considering the relation Ek = RF + βk (EM − RF) as an expression of Ek according to βk, tested the following hypotheses on the New York Stock Exchange (Figure 3.27):

• The relation Ek = f(βk) is linear and increasing.
• βk is a complete measurement of the risk of the equity (k) on the market; in other words, the specific risk σεk² is not a significant explanation of Ek.

42 Fama E. and MacBeth J., Risk, return and equilibrium: empirical tests, Journal of Political Economy, Vol. 81, No. 3, 1973, pp. 607–36.
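A short numeric illustration of the CAPM equation may help. This is a hedged sketch: the function simply evaluates Ek = RF + βk(EM − RF), and the rates and betas below are illustrative assumptions, not figures from the text.

```python
# Minimal sketch of the CAPM equation E_k = R_F + beta_k * (E_M - R_F).
# The risk-free rate, market return and betas are illustrative assumptions.

def capm_expected_return(rf, beta, em):
    """Equilibrium expected return of a security under the CAPM."""
    return rf + beta * (em - rf)

rf, em = 0.03, 0.08  # assumed risk-free rate and expected market return
for beta in (0.5, 1.0, 1.5):
    print(beta, round(capm_expected_return(rf, beta, em), 4))
```

A security with β > 1 amplifies the market risk premium, while a security with β < 1 attenuates it.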


Figure 3.27 CAPM test

To do this, they used generalisations of the equation Ek = f (βk ), including powers of βk of a degree greater than 1 and a term that takes the speciﬁc risk into consideration. Their conclusion is that the CAPM model is in most cases acceptable.

3.3.2 Arbitrage pricing theory

In the CAPM model, the risk premium Ek − RF for an equity is expressed as a multiple of the risk premium EM − RF for the market:

Ek − RF = βk (EM − RF)

The proportionality coefficient is the β of the security. It can therefore be considered that this approach expresses the risk premium for an equity on the basis of the risk premium for a single explanatory macroeconomic factor or, which amounts to the same thing, on the basis of an aggregate that includes all the macroeconomic factors that interact with the market. The arbitrage pricing theory,43 or APT, allows a more refined analysis of the portfolio than does the CAPM, as breaking down the risk according to the single market factor, namely the beta, may prove insufficient to describe all the risks in a portfolio of equities. Hence the interest in resorting to risk breakdowns on the basis of several factors F1, F2, ..., Fp:

Ek − RF = Σ(j=1..p) αkj (EFj − RF)

The APT theory shows that in an efficient market the quoted equity prices will be balanced by successive arbitrages, through the involvement of actors on the market. If one makes a point of watching developments in relative prices, it is possible to extract from the market a small number of arbitrage factors that allow the prices to balance out. This is precisely what the APT model does.

43 Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, Vol. 13, 1976, pp. 341–60.


The early versions44 of the APT model relied on a previously compiled list of basic factors such as an industrial activity index, the spread between short-term and long-term interest rates, the difference in returns on bonds with very different ratings (see Section 4.2.1) etc. The coefficients αk1, ..., αkp are then determined by a regression technique based on historical observations Rkt and RFj,t (j = 1, ..., p). The more recent versions are based on more empirical methods that extract uncorrelated factors by a statistical technique45 (factor analysis), without the number of factors being known beforehand and even without their having any economic interpretation at all. The factors obtained46 from time series of returns on asset prices are purely statistical. Taken individually, they are not variables that are commonly used to describe a portfolio construction process or management strategy. None of them represents an interest, inflation or exchange rate. They are the equivalent of an orthogonal axis system in geometry. The sole aim is to obtain a frame of reference that allows the interrelations between the assets studied to be described on a basis that is stable over time. Once this frame of reference is established, the risk on any quoted asset (equities, bonds, investment funds etc.) is broken down into a systematic part (common to all assets in the market) that can be represented in the factor space, and a specific part (particular to the asset). The systematic part is subsequently explained by sensitivity coefficients (αkj) to the different statistical factors. The explanatory power of the model stems from the fact that the standard variables (economic, sectorial, fundamental etc.) used to understand how an asset behaves are also represented in the factor frame of reference, provided an associated quoted support exists (price history).
The relation that links the return on a security to the various factors allows a breakdown of its variance into a part linked to the systematic risk factors (the explanatory statistical factors) and a part that is specific to the security and therefore diversifiable (regression residues etc.), that is:

σk² = Σ(j=1..p) αkj² var(RFj) + σεk²

Example

A technically developed version of this method, accompanied by software, has been produced by Advanced Portfolio Technologies Inc. It extracts a series of statistical factors (represented by time series of crossed returns on assets) from the market, using a pattern-search algorithm. In this way, if the left of Figure 3.28 represents the observed series of returns on four securities, the right of the same figure illustrates the three primary factors that allow reconstruction of the previous four series by linear combination. For example, the first series breaks down into:

R1 − RF = 1 · (RF1 − RF) + 1 · (RF2 − RF) + 0 · (RF3 − RF) + ε1

44 Dhrymes P. J., Friend I. and Gultekin N. B., A critical re-examination of the empirical evidence on the arbitrage pricing theory, Journal of Finance, Vol. 39, 1984, pp. 323–46. Chen N. F., Roll R. and Ross S. A., Economic forces and the stock market, Journal of Business, Vol. 59, 1986, pp. 383–403. More generally, Grinold R. C. and Kahn R. N., Active Portfolio Management, McGraw-Hill, 1998.
45 See for example Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990; or Morrison D., Multivariate Statistical Methods, McGraw-Hill, 1976.
46 Readers interested in the mathematical developments produced by extracting statistical factors from historical series of returns on assets should read Mehta M. L., Random Matrices, Academic Press, 1996. This work deals in depth with eigenvalue and eigenvector problems for very large matrices generated randomly.

Figure 3.28 Arbitrage pricing theory

3.3.3 Performance evaluation

3.3.3.1 Principle

The portfolio manager47 is, of course, interested in evaluating the product that he manages. To do this properly, he will compare the return on his portfolio with the return on the market in which he is investing. From a practical point of view, this comparison will be made in relation to an index representative of the market sector in question.

Note

The return on a real portfolio between moments s and t is calculated simply using the relation RP,[s;t] = (Vt − Vs)/Vs, provided there has been no movement within the portfolio during the interval of time in question. In general, however, there are flows (new securities purchased, securities sold etc.). It is therefore advisable to evaluate the return with the effect of these movements eliminated. Note t1 < ... < tn the moments at which these movements occur, and set t0 = s and tn+1 = t. The return to be taken into consideration is therefore given by:

RP,]s;t] = Π(k=0..n) (1 + RP,]tk;tk+1[) − 1

where

RP,]tk;tk+1[ = (Vtk+1(−) − Vtk(+)) / Vtk(+)

Here, Vtj(−) and Vtj(+) represent the value of the portfolio just before and just after the movement at moment tj respectively. In Section 3.2 it was clearly shown that the quality of a security or a portfolio is not measured merely by its return. What, in fact, should be thought of two portfolios A and B whose returns for a given period are 6.2 % and 6.3 % respectively, but where the risk of B is twice that of A? The performance measurement indices presented below take into account not just the return, but also the risk on the security or portfolio.

Management strategies, both active and passive, are dealt with in the following paragraph.
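The chained sub-period calculation described in the note above can be sketched as follows; the portfolio values and the single cash flow are hypothetical.

```python
# Time-weighted (movement-adjusted) return: chain the sub-period returns
# between cash-flow dates. Each entry is (value just before, value just after)
# any movement at that date; the figures below are hypothetical.

def time_weighted_return(values):
    growth = 1.0
    for (_, v_after), (v_before, _) in zip(values, values[1:]):
        growth *= v_before / v_after  # 1 + return over ]t_k; t_{k+1}[
    return growth - 1.0

# 100 grows to 105, an injection of 10 brings it to 115, which grows to 120.75
history = [(100.0, 100.0), (105.0, 115.0), (120.75, 120.75)]
print(time_weighted_return(history))  # two 5 % sub-periods chain to about 10.25 %
```

The injected cash flow is neutralised: only the manager's 5 % growth in each sub-period contributes to the chained return.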


The indicators presented here are all based on relations produced by the financial asset valuation model, and more particularly on the CAPM equation. They therefore assume that the hypotheses underlying this model are satisfied. The first two indicators are based on the market straight-line equation and the CAPM equation respectively; the third is a variation on the second.

3.3.3.2 Sharpe index

The market straight-line equation is:

EP = RF + ((EM − RF)/σM) · σP

which can be rewritten as follows:

(EP − RF)/σP = (EM − RF)/σM

This relation expresses that the excess return (compared to the risk-free rate), standardised by the standard deviation, is (at equilibrium) identical for a well-diversified portfolio and for the market. The term Sharpe index is given to the expression

SIP = (EP − RF)/σP

which in practice is compared to the equivalent expression calculated for an index representative of the market.

Example

Let us take the data used for the simple Sharpe index model (Section 3.2.4):

E1 = 0.05   σ1 = 0.10   ρ12 = 0.3
E2 = 0.08   σ2 = 0.12   ρ13 = 0.1
E3 = 0.10   σ3 = 0.15   ρ23 = 0.4

Let us then consider the specific portfolio relative to the value λ = 0.010 for the risk parameter. In this case, we will have X1 = 0.4387, X2 = 0.1118 and X3 = 0.4496, and therefore EP = 0.0758 and σP = 0.0912. We will also have EI = 0.04 and σI = 0.0671, and RF is taken, as in Section 3.2.5, as 0.03. The Sharpe index for the portfolio is therefore given as:

SIP = (0.0758 − 0.03)/0.0912 = 0.5025

The Sharpe index relative to the index equals:

SII = (0.04 − 0.03)/0.0671 = 0.1490

This shows that the portfolio in question is performing better than the market.
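The Sharpe index example can be checked numerically from the stated inputs (the expected returns, volatilities, correlations, weights and RF are those of the example):

```python
import math

# Recompute E_P, sigma_P and the Sharpe index from the example data.
E = [0.05, 0.08, 0.10]          # expected returns
s = [0.10, 0.12, 0.15]          # standard deviations
rho = {(0, 1): 0.3, (0, 2): 0.1, (1, 2): 0.4}
X = [0.4387, 0.1118, 0.4496]    # proportions for the risk parameter 0.010
RF = 0.03

EP = sum(x * e for x, e in zip(X, E))
var = sum(X[i] * X[j] * s[i] * s[j] * (1.0 if i == j else rho[min(i, j), max(i, j)])
          for i in range(3) for j in range(3))
sigmaP = math.sqrt(var)
SI_P = (EP - RF) / sigmaP
print(round(EP, 4), round(sigmaP, 4), round(SI_P, 4))
```

The double sum over i and j is exactly the portfolio variance Σi Σj Xi Xj σij, with σij = ρij σi σj.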


Although Section 3.3.4 is given over to portfolio management strategies for equities, some thoughts are offered here on the role of the Sharpe index in taking investment (and disinvestment) decisions. Suppose that we are in possession of a portfolio P and we are envisaging the purchase of an additional block of equities A, the proportions of P and A being noted respectively XP and XA. Of course, XP + XA = 1, and XA is positive or negative depending on whether an investment or a disinvestment is involved. The portfolio produced as a result of the decision taken will be noted P′ and its return will be given by RP′ = XP RP + XA RA. The expected return and variance of return for the new portfolio are

EP′ = (1 − XA)EP + XA EA
σP′² = (1 − XA)² σP² + XA² σA² + 2XA (1 − XA) σP σA ρAP

We take as the purchase criterion for A the fact that the Sharpe index for the new portfolio is at least equal to that of the old one, SIP′ ≥ SIP, which is expressed as:

[(1 − XA)EP + XA EA − RF]/σP′ ≥ (EP − RF)/σP

By isolating the expected return on A, we obtain as the condition

EA ≥ EP + ((EP − RF)/XA) · (σP′/σP − 1)

It is worth noting that if A does not increase the risk of the portfolio (σP′ ≤ σP), it is not even necessary to have EA ≥ EP in order to purchase A.

Example

Suppose that one has a portfolio for which EP = 0.08, that the risk-free rate is RF = 0.03 and that one is envisaging a purchase of A at the rate XA = 0.02. The condition then becomes

EA ≥ 0.08 + (5/2) · (σP′/σP − 1)

In the specific case where the management of risks is such that σA = σP, the ratio of the standard deviations is given by

σP′/σP = √[(1 − XA)² + XA² + 2XA(1 − XA)ρAP] = √(0.9608 + 0.0392 ρAP)

This allows the conditions of investment to be determined according to the correlation coefficient value: if ρAP = −1, 0 or 1, the condition becomes EA ≥ −0.02, EA ≥ 0.0305 and EA ≥ 0.08 respectively.
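The purchase criterion above is easy to explore numerically. The sketch below evaluates the bound for the example figures EP = 0.08, RF = 0.03, XA = 0.02 with σA = σP; note that for ρAP = −1 the bound works out negative.

```python
import math

# Minimum expected return E_A on the new block of equities A that keeps the
# Sharpe index of the enlarged portfolio at least equal to the old one, in
# the special case sigma_A = sigma_P (example figures from the text).

def min_expected_return(EP, RF, XA, rho):
    ratio = math.sqrt((1 - XA) ** 2 + XA ** 2 + 2 * XA * (1 - XA) * rho)
    return EP + (EP - RF) / XA * (ratio - 1)

for rho in (-1.0, 0.0, 1.0):
    # note: for rho = -1 the bound works out to -0.02 (A may even lose money)
    print(rho, round(min_expected_return(0.08, 0.03, 0.02, rho), 4))
```

The more A is negatively correlated with P, the more its diversification benefit lowers the return required of it.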


3.3.3.3 Treynor index

The CAPM equation for the kth equity in the portfolio, Ek = RF + βk (EM − RF), allows the following to be written:

Σ(k=1..N) Xk Ek = Σ(k=1..N) Xk · RF + Σ(k=1..N) Xk βk · (EM − RF)

or

EP = RF + βP (EM − RF)

Taking account of the fact that βM = 1, this last relation can be written as:

(EP − RF)/βP = (EM − RF)/βM

The interpretation is similar to that of the Sharpe index. The Treynor index is therefore defined by:

TIP = (EP − RF)/βP

which will be compared to the similar expression for an index.

Example

Let us take the data above, with the addition of (see Section 3.2.4): β1 = 0.60, β2 = 1.08, β3 = 1.32. This will give βP = 0.9774. The Treynor index for this portfolio is therefore obtained as:

TIP = (0.0758 − 0.03)/0.9774 = 0.0469

while the Treynor index relative to the market index is

TII = (0.04 − 0.03)/1 = 0.0100

This leads to the same conclusion.

3.3.3.4 Jensen index

Following the reasoning used for the Treynor index, we have EP − RF = βP (EM − RF). This relation holding (at equilibrium) for a well-diversified portfolio, a portfolio P will present an excess return in relation to the market if there is a number αP > 0 such that EP − RF = αP + βP (EM − RF). The Jensen index, JIP = α̂, is the estimator of the constant term of the regression

EP,t − RF,t = α + β (EI,t − RF,t)

Here, the variable to be explained is the excess return of the portfolio over the risk-free rate, and the explanatory variable is the excess return of the market representative index. Its value is, of course, compared to 0.
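The Treynor and Jensen indices of the running example can be recomputed directly (EP, βP, EI and RF are the figures used in the text):

```python
# Treynor index and Jensen's alpha from the running example figures.
EP, betaP, EI, RF = 0.0758, 0.9774, 0.04, 0.03

TI_P = (EP - RF) / betaP              # Treynor index of the portfolio
TI_I = (EI - RF) / 1.0                # the index itself has beta = 1
JI_P = (EP - RF) - betaP * (EI - RF)  # Jensen's alpha

print(round(TI_P, 4), round(TI_I, 4), round(JI_P, 4))  # about 0.0469, 0.0100, 0.0360
```

Both indices lead to the same conclusion as the Sharpe index here: the portfolio outperforms the market on a risk-adjusted basis.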


Example

It is easy to verify that with the preceding data, we have JIP = (0.0758 − 0.03) − 0.9774 · (0.04 − 0.03) = 0.0360, which is strictly positive.

3.3.4 Equity portfolio management strategies

3.3.4.1 Passive management

The aim of passive management is to obtain a return equal to that of the market. By the definition of the market, the gains (returns higher than market returns) realised by certain investors will be compensated by the losses (returns lower than market returns) suffered by other investors:48 the average return obtained by all the investors together is the market return. The reality is a little different: because of transaction costs, the average return enjoyed by investors is slightly less than the market return. The passive strategy therefore consists of:

• Putting together a portfolio of identical (or very similar) composition to the market, which corresponds to optimal diversification.
• Limiting the volume of transactions as far as possible.

This method of operation poses a number of problems. For example, for the management of some types of portfolio, regulations dictate that each security may only be present up to a fixed maximum proportion, which is incompatible with passive management if a security represents a particularly high share of the stock-exchange capitalisation of the market. Another problem is the presence of some securities that not only have high prices but are indivisible; this may lead to the construction of portfolios with a value so high that they become unusable in practice. These problems have led to the creation of 'index funds', collective investment organisations that 'imitate' the market. After choosing an index that represents the market in which one wishes to invest, one puts together a portfolio consisting of the same securities as those in the index (or sometimes simply the most heavily weighted ones), in the same proportions.
Of course, as and when the prices of the constituent equities change, the composition of the portfolio will have to be adapted, and this presents a number of difficulties. The reaction time inevitably causes differences between the return on the portfolio and the market return; these are known as 'tracking errors'. In addition, this type of management incurs a number of transaction costs, for adapting the portfolio to the index, for reinvesting dividends etc. For these reasons, the return on such a portfolio will in general be slightly lower than that of the index.

3.3.4.2 Active management

The aim of active management is to obtain a return higher than the market return. A fully efficient market can be beaten only temporarily and by chance: in the long term, the return cannot exceed the market return. Active management therefore presupposes that the market is not fully efficient.

48 This type of situation is known in game theory as a zero-sum game. Refer for example to Binmore K., Jeux et théorie des jeux, De Boeck & Larcier, 1999.


Two main principles allow the target set to be achieved.

1) Asset allocation, which evolves over time and is also known as market timing, consists of putting together a portfolio made up partly of the market portfolio or an index portfolio and partly of a risk-free asset (or one that is significantly less risky than equities, such as a bond). The respective proportions of these two components are then changed as time passes, depending on whether a rise or a fall in the index is anticipated.

2) Stock picking consists of putting together a portfolio of equities by choosing the securities considered to be undervalued and likely to produce a return higher than the market return in the near or more distant future (market reaction).

In practice, professionals use strategies based on one of the two approaches or a mixture of the two. In order to assess the quality of active management, the portfolio put together should be compared with the market portfolio from the point of view of expected return and of risk incurred. These portfolio performance indices have been studied in Section 3.3.3. Let us now examine some methods of market timing and a method of stock picking: the application of the dividend discount model.

3.3.4.3 Market timing

This technique consists of managing a portfolio made up of the market portfolio (M) for equities and a bond (O), in the respective proportions X and 1 − X, X being adapted according to the expected performance of the two components. These performances, which determine a market timing policy, may be assessed using different criteria:

• The price–earnings ratio, introduced in Section 3.1.3: PER = price/earnings.
• The yield gap, which is the ratio between the return on the bond and the return on the equities (dividend/price).
• The earning yield, which is the product of the PER and the bond rate.
• The risk premium, which is the difference between the return on the market portfolio and the return on the bond: RP = EM − EO. It may be estimated using a history, but it is preferable to use an estimation produced beforehand by a financial analyst, for example using the DDM (see below).

Of course, small values for the first three criteria are favourable to investment in equities; the situation is reversed for the risk premium. The first method for implementing a market timing policy is recourse to decision channels. If one denotes by c one of the four criteria mentioned above, for which historical observations are available (and therefore an estimate c̄ of its average and sc of its standard deviation), we choose, somewhat arbitrarily, to invest a certain percentage in equities depending on the observed value of c compared to c̄, the difference between the two being modulated by sc. We may choose, for example, to invest 70 %, 60 %, 50 %, 40 %


Figure 3.29 Fixed decision channels

Figure 3.30 Moving decision channels

or 30 % in equities depending on the position of c in relation to the limits:49 c̄ − (3/2)sc, c̄ − (1/2)sc, c̄ + (1/2)sc and c̄ + (3/2)sc (Figure 3.29). This method does not take account of the change in the parameter c over time. The c̄ and sc parameters can therefore be calculated over a sliding history (for example, one year) (Figure 3.30). Another, more rigorous method can be used with the risk premium only. In the search for the efficient frontier, we have looked each time for the minimum, with respect to the proportions, of the expression σP² − λEP, in which the λ parameter corresponds to the risk (λ = 0 for a cautious portfolio, λ = +∞ for a speculative portfolio). This parameter is equal to the slope of the straight line in the plane (E, σ²) tangential to the efficient frontier and issuing from the point (RF, 0). According to the separation theorem (see Section 3.3.1), the contact point of this tangent corresponds to the market portfolio (see Figure 3.31), and in consequence we have:

λ = σM²/(EM − RF)

In addition, the return on a portfolio consisting of a proportion X of the market portfolio and a proportion 1 − X of the bond is given by RP = XRM + (1 − X)RO, which

The order of the channels must be reversed for the risk premium.
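The fixed decision channels can be sketched as a simple lookup; the criterion value, its mean and its standard deviation below are hypothetical inputs, and the 70/60/50/40/30 % steps are those of the example.

```python
# Equity weight chosen from the position of criterion c relative to the limits
# c_bar -/+ s_c/2 and c_bar -/+ 3*s_c/2. Small values of c favour equities; for
# the risk premium criterion the order of the channels must be reversed.

def equity_weight(c, c_bar, s_c):
    limits = [c_bar - 1.5 * s_c, c_bar - 0.5 * s_c,
              c_bar + 0.5 * s_c, c_bar + 1.5 * s_c]
    weights = [0.70, 0.60, 0.50, 0.40, 0.30]
    for limit, weight in zip(limits, weights):
        if c < limit:
            return weight
    return weights[-1]

print(equity_weight(11.0, 15.0, 2.0))  # well below the mean -> 70 % in equities
```

With c̄ = 15 and sc = 2, the limits fall at 12, 14, 16 and 18, so a criterion value of 15 gives the neutral 50 % allocation.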

Figure 3.31 Separation theorem

allows the following to be determined:

EP = XEM + (1 − X)EO
σP² = X²σM² + 2X(1 − X)σMO + (1 − X)²σO²

The problem therefore consists of determining the value of X which minimises the expression:

Z(X) = σP² − λEP = X²σM² + 2X(1 − X)σMO + (1 − X)²σO² − λ[XEM + (1 − X)EO]

The derivative of this function:

Z′(X) = 2XσM² + 2(1 − 2X)σMO − 2(1 − X)σO² − λ(EM − EO)
= 2X(σM² − 2σMO + σO²) + 2σMO − 2σO² − λ · RP

provides the proportion sought:

X = [λ · RP − 2(σMO − σO²)] / [2(σM² − 2σMO + σO²)]

or, in the same way, replacing λ and RP by their values:

X = [((EM − EO)/(EM − RF)) · σM² − 2(σMO − σO²)] / [2(σM² − 2σMO + σO²)]

Example

If we have the following data:

EM = 0.08   σM = 0.10
EO = 0.06   σO = 0.02
RF = 0.04   ρMO = 0.6


we can calculate successively:

σMO = 0.10 · 0.02 · 0.6 = 0.0012
λ = 0.10²/(0.08 − 0.04) = 0.25
RP = 0.08 − 0.06 = 0.02

and therefore:

X = [0.25 · 0.02 − 2 · (0.0012 − 0.02²)] / [2 · (0.10² − 2 · 0.0012 + 0.02²)] = 0.2125
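The optimal proportion just computed can be verified with a few lines; the data are those of the example.

```python
# Optimal equity proportion X for the market-timing rule, with the example
# data E_M = 0.08, E_O = 0.06, R_F = 0.04, sigma_M = 0.10, sigma_O = 0.02,
# rho_MO = 0.6 used in the text.

EM, EO, RF = 0.08, 0.06, 0.04
sM, sO, rhoMO = 0.10, 0.02, 0.6

sMO = sM * sO * rhoMO        # market/bond covariance: 0.0012
lam = sM ** 2 / (EM - RF)    # risk parameter lambda = 0.25
RP = EM - EO                 # risk premium over the bond: 0.02

X = (lam * RP - 2 * (sMO - sO ** 2)) / (2 * (sM ** 2 - 2 * sMO + sO ** 2))
print(round(X, 4))           # proportion to invest in equities
```

The script reproduces X = 0.2125, i.e. 21.25 % in equities and 78.75 % in bonds.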

Under these conditions, therefore, it is advisable to invest 21.25 % in equities (market portfolio) and 78.75 % in bonds.

3.3.4.4 Dividend discount model

The aim of the dividend discount model, or DDM, is to compare the expected return on an equity with its equilibrium return, which will allow us to determine whether it is overvalued or undervalued. The expected return, R̃k, is determined using a model for discounting future dividends. A reasoning similar to that used in the Gordon–Shapiro formula (Section 3.1.3), or a generalisation of it, can be applied. While the Gordon–Shapiro relation assumes a constant rate of growth for dividends, more developed models (two-rate models) use, for example, a rate of growth that is constant over several years followed by another, lower rate for subsequent years. Alternatively, a three-rate model may be used, with a period of a few years between the two constant-rate periods during which the growth rate declines linearly so as to make a continuous connection. The equilibrium return Ek is determined using the CAPM equation (Section 3.3.1). This equation is written Ek = RF + βk (EM − RF). If one considers that it expresses Ek as a function of βk, we are looking at a straight-line equation; the line passes through the point (0, RF) and, since βM = 1, through the point (1, EM). This straight line is known as the financial asset evaluation line or the security market line. If the expected return R̃k for each security were equal to its equilibrium return Ek, all the points (βk, R̃k) would be located on the security market line. In practice, this is not the case because of certain inefficiencies in the market (see Figure 3.32).

Figure 3.32 Security market line


This technique considers that the R˜ k evaluation made by the analysts is correct and that the differences noted are due to market inefﬁciency. Therefore, the securities whose representative point is located above the security market line are considered to be undervalued, and the market should sooner or later rectify the situation and produce an additional return for the investor who purchased the securities.

3.4 EQUITY DYNAMIC MODELS

The above paragraphs deal with static aspects, considering merely a 'photograph' of the situation at a given moment. We will now touch on the modelling of developments in equity returns or prices over time. The notation used here is a little different: the value of the equity at moment t is noted St. This is a classic notation (S indicating 'stock'); in addition, the present models are used among other things to support the development of option valuation models for equities (see Section 5.3), for which the notation Ct is reserved for equity options (indicating 'call'). Finally, we should point out that what follows, unless specified otherwise, is valid only for equities that do not give rise to the distribution of dividends.

3.4.1 Deterministic models

3.4.1.1 Discrete model

Here, the equity is evaluated at moments t = 0, 1, etc. If it is assumed that the return on the equity between moments t and t + 1 is i, we can write i = (St+1 − St)/St, which leads to the evolution equation St+1 = St · (1 + i). If the rate of return i is constant and the initial value S0 is taken into account, the difference equation above has the solution St = S0 · (1 + i)^t. If the rate varies from period to period (ik for the period ]k − 1; k]), the previous relation becomes St = S0 · (1 + i1)(1 + i2) ... (1 + it).

3.4.1.2 Continuous model

We are looking here at an infinitesimal development in the value of the security. If it is assumed that the return between moments t and t + Δt (with 'small' Δt) is proportional to the duration Δt, with a proportionality factor δ:

δ · Δt = (St+Δt − St)/St

the evolution equation is the differential equation50 S′t = St · δ. The solution to this equation is given by St = S0 · e^(δt). The link between this relation and the corresponding relation for the discrete case will be noted, provided δ = ln(1 + i).

50 Obtained by making Δt tend towards 0.
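The discrete and continuous deterministic models coincide when δ = ln(1 + i), which a short check makes concrete; S0 and i below are illustrative assumptions.

```python
import math

# Discrete model S_t = S_0 (1+i)^t versus continuous model S_t = S_0 e^(delta t):
# with delta = ln(1 + i) the two trajectories agree at integer times.
# S_0 and i are illustrative assumptions.

S0, i = 100.0, 0.05
delta = math.log(1 + i)

for t in range(4):
    discrete = S0 * (1 + i) ** t
    continuous = S0 * math.exp(delta * t)
    print(t, round(discrete, 6), round(continuous, 6))
```

Both columns grow by the same 5 % each period: the continuous rate δ ≈ 4.879 % compounds continuously to the same effect as the discrete 5 %.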


If the rate of return δ is not constant, the differential evolution equation takes the form S′t = St · δ(t), thus leading to the more complex solution St = S0 · e^(∫0..t δ(u) du).

Note

The parameters appearing in the above models (the constant rates i and δ, or the variable rates i1, i2, ... and δ(t)) should of course, for practical use, be estimated on the basis of historical observations.

3.4.1.3 Generalisation

These two aspects, discrete and continuous, can of course be superimposed. We therefore consider:

• A continuous evolution of the rate of return, represented by the function δ(t).

On top of this:

• A set of discrete variations occurring at moments τ1, τ2, ..., τn, so that the rate of return between τk−1 and τk is equal to ik.

If n is the greatest integer such that τn ≤ t, the change in the value is given by

St = S0 · (1 + i1)^τ1 (1 + i2)^(τ2−τ1) ... (1 + in)^(τn−τn−1) (1 + in+1)^(t−τn) · e^(∫0..t δ(u) du)

This presentation allows the process of dividend payment, for example, to be taken into consideration in a discrete or continuous model. Therefore, where the model includes only the continuous part represented by δ(t), the above relation represents the change in the value of an equity that pays dividends at moments τ1, τ2, etc., with an amount Dk paid at τk and linked to ik by the relation

ik = −Dk/Sk(−)

Here, Sk(−) is the value of the security just before payment of the kth dividend.

3.4.2 Stochastic models

3.4.2.1 Discrete model

It is assumed that the development from one period to another occurs as follows: the equity, which at moment t has the (random) value St, will at the following moment t + 1 have one of the two values St · u (higher than St) or St · d (lower than St), with respective probabilities α and 1 − α. We therefore have d ≤ 1 ≤ u, but it is also supposed that d ≤ 1 < 1 + RF ≤ u, without which an arbitrage opportunity would clearly exist. In practice, the parameters u, d and α should be estimated on the basis of observations.


Generally speaking, the following graphic representation is used for evolutions in equity prices:

St → St+1 = St · u   (probability α)
St → St+1 = St · d   (probability 1 − α)

It is assumed that the parameters u, d and α remain constant over time, and the probability α will no longer be shown explicitly in the following diagrams; the rising branches, for example, will always correspond to an increase (by the factor u) in the value of the security, with probability α. Note that the return on the equity between moments t and t + 1 is given by:

(St+1 − St)/St = u − 1   (probability α)
(St+1 − St)/St = d − 1   (probability 1 − α)

Between moments t + 1 and t + 2, we will have, in the same way and according to the branch obtained at the end of the previous period:

St+1 = St · u → St+2 = St+1 · u = St · u²   or   St+2 = St+1 · d = St · ud
St+1 = St · d → St+2 = St+1 · u = St · ud   or   St+2 = St+1 · d = St · d²

It is therefore noted that a rise followed by a fall leads to the same result as a fall followed by a rise. Generally speaking, a graph known as a binomial tree can be constructed (see Figure 3.33), rising from period 0 (when the equity has a certain value S0) to period t. It is therefore evident that the (random) value of the equity at moment t is given by St = S0 · u^N d^(t−N), in which the number N of rises is of course a binomial random variable51 with parameters (t; α):

Pr[N = k] = C(t, k) · α^k (1 − α)^(t−k)

The following property can be demonstrated:

E(St) = S0 · (αu + (1 − α)d)^t

Figure 3.33 Binomial tree (successive levels of nodes: S0; S0·u, S0·d; S0·u², S0·ud, S0·d²; S0·u³, S0·u²d, S0·ud², S0·d³; ...)

51 See Appendix 2 for the development of this concept and for the properties of the random variable.
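The property E(St) = S0(αu + (1 − α)d)^t can be verified by summing directly over the binomial distribution of N; the parameters u, d, α and t below are illustrative assumptions.

```python
from math import comb

# Expected value of S_t = S_0 * u^N * d^(t-N) with N ~ binomial(t, alpha),
# checked against the closed form S_0 * (alpha*u + (1-alpha)*d)^t.
# u, d, alpha and t are illustrative assumptions.

S0, u, d, alpha, t = 100.0, 1.1, 0.9, 0.6, 5

expectation = sum(S0 * u ** k * d ** (t - k)
                  * comb(t, k) * alpha ** k * (1 - alpha) ** (t - k)
                  for k in range(t + 1))
closed_form = S0 * (alpha * u + (1 - alpha) * d) ** t

print(round(expectation, 8), round(closed_form, 8))  # the two values agree
```

The agreement is exactly the Newton binomial identity used in the derivation that follows.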


In fact, what we have is:

E(St) = Σ(k=0..t) S0 · u^k d^(t−k) · C(t, k) · α^k (1 − α)^(t−k)

= S0 · Σ(k=0..t) C(t, k) · (αu)^k ((1 − α)d)^(t−k)

This leads to the relation declared through the Newton binomial formula. Note that this property is a generalisation for the random case of the determinist formula St = S0 · (1 + i)t . 3.4.2.2 Continuous model The method of equity value change shown in the binomial model is of the random walk type. At each transition, two movements are possible (rise or fall) with unchanged probability. When the period between each transaction tends towards 0, this type of random sequence converges towards a standard Brownian motion or SBM.52 Remember that we are looking at a stochastic process wt (a random variable that is a function of time), which obeys the following processes: • w0 = 0. • wt is a process with independent increments : if s < t < u, then wu − wt is independent of wt − ws . • wt is a process with stationary increments : the random variables wt+h − wt and wh are identically distributed. • Regardless of what t may be, the random variable √ wt is distributed according to a normal law of zero mean and standard deviation t: fwt (x) = √

1 2πt

e−x

2

/2t

The first use of this process for modelling the development in the value of a financial asset was produced by L. Bachelier.53 He assumed that the value of a security at a moment t is a first-degree function of the SBM: St = a + b·wt. According to the above definition, a is the value of the security at t = 0 and b is a measure of the volatility σ of the security per unit of time. The relation used was therefore St = S0 + σ·wt. The shortcomings of this approach are of two types:

• The same absolute variation (€10 for example) corresponds to variations in return that are very different depending on the price level (20 % for a quotation of €50 and 5 % for a value of €200).
• The random variable St follows a normal law with mean S0 and standard deviation σ·√t; this model therefore allows for negative prices.

52 Appendix 2 provides details of the results, reasoning and properties of these stochastic processes.
53 Bachelier L., Théorie de la spéculation, Gauthier-Villars, 1900. Several more decades were to pass before this reasoning was finally accepted and improved upon.


For this reason, P. Samuelson54 proposed the following model. During the short interval of time [t; t + dt], the return (and not the price) alters according to an Itô process:

dSt/St = (St+dt − St)/St = ER·dt + σR·dwt

Here, the non-random term (the trend) is proportional to the expected return, and the stochastic term involves the volatility per unit of time in this return. This model is termed a geometric Brownian motion.

Example

Figure 3.34 shows a simulated trajectory (development over time) for 1000 very short periods with the values ER = 0.1 and σR = 0.02, based on a starting value of S0 = 100.

We can therefore establish the first property in the context of this model: the stochastic process St showing the changes in the value of the equity can be written as

St = S0 · exp[(ER − σR²/2)·t + σR·wt]

This shows that St follows a log-normal distribution (it can only take on positive values). In fact, applying the Itô formula55 to the function f(x, t) = ln x with x = St, we obtain:

d(ln St) = [0 + ER·St·(1/St) − (1/2)·σR²·St²·(1/St²)]·dt + σR·St·(1/St)·dwt
         = (ER − σR²/2)·dt + σR·dwt

This equation resolves into:

ln St = C* + (ER − σR²/2)·t + σR·wt

[Figure 3.34: Geometric Brownian motion — simulated trajectory over 1000 very short periods, starting at 100.]

54 Samuelson P., Mathematics of speculative price, SIAM Review, Vol. 15, No. 1, 1973.
55 See Appendix 2.
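A trajectory such as the one in Figure 3.34 can be reproduced by simulating the exact solution St = S0·exp((ER − σR²/2)·t + σR·wt) (a sketch; the random seed and the total time span are our own choices, not the book's):

```python
import math
import random

def gbm_path(S0, ER, sigmaR, total_time, n_steps, seed=0):
    # Simulate S_t = S0·exp((ER − σR²/2)·t + σR·w_t) on a grid of n_steps points
    rng = random.Random(seed)
    dt = total_time / n_steps
    w, path = 0.0, [S0]
    for i in range(1, n_steps + 1):
        w += rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        path.append(S0 * math.exp((ER - sigmaR**2 / 2) * i * dt + sigmaR * w))
    return path

path = gbm_path(100.0, 0.1, 0.02, total_time=1.0, n_steps=1000)
assert len(path) == 1001 and all(p > 0 for p in path)  # log-normal: always positive
```

The positivity of every simulated value illustrates the log-normal property established above.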


The integration constant C* is of course equal to ln S0, and passage to the exponential gives the formula stated. It is then easy to deduce the moments of the random variable St:

E(St) = S0·e^(ER·t)
var(St) = S0²·e^(2·ER·t)·(e^(σR²·t) − 1)

The first of these relations shows that the average return E(St/S0) on this equity over the interval [0; t] is equivalent to a capitalisation at the instant rate ER.

A second property can be established, relative to the instant return on the security over the interval [0; t]. This return obeys a normal distribution with mean and standard deviation given by

(ER − σR²/2 ; σR/√t)

This result may appear paradoxical, as the mean of the return is not equal to ER. This is because of the structure of the stochastic process and is not incompatible with the intuitive solution, as we have E(St) = S0·e^(ER·t). To establish this property, expressing the stochastic instant return process as δt, we can write St = S0·e^(δt·t), that is, according to the preceding property,

δt = (1/t)·ln(St/S0) = ER − σR²/2 + σR·(wt/t)

This establishes the property.

4 Bonds

4.1 CHARACTERISTICS AND VALUATION

To an investor, a bond is a financial asset issued by a public institution or private company, corresponding to a loan that confers the right to interest payments (known as coupons) and repayment of the loan upon maturity. It is a negotiable security and its issue price, redemption value, coupon total and life span are generally known and fixed beforehand.

4.1.1 Definitions

A bond is characterised by various elements:

1. The nominal value or NV of a bond is the amount printed on the security, which, along with the nominal rate of the bond, allows the coupon total to be determined.
2. The bond price is shown as P. This may be the price at issue (t = 0) or at any subsequent moment t. The maturity price is of course identical to the redemption value R mentioned above.
3. The coupons Ct constitute the interest paid by the issuer. These are paid at various periods, which are assumed to be both regular and annual (t = 1, 2, ..., T).
4. The maturity T represents the period of time that separates the moment of issue from the time of reimbursement of the security.

The financial flows associated with a bond are therefore:

• From the purchaser, the payment of its price; this may be either the issue price paid to the issuer or the market price of the bond paid to any seller at a time subsequent to the issue.
• From the issuer, the payment of coupons from the time of acquisition onwards and the repayment on maturity.

The issue price, nominal value and repayment value are not necessarily equal. There may be premiums (positive or negative) on issue and/or on repayment.

The bonds described above are those that we will be studying in this chapter; they are known as fixed-rate bonds. There are many variations on this simple bond model. It is possible, for example, for no coupons to be paid during the bond's life span, the return thus being only the difference between the issue price and the redemption value.
This is referred to as a zero-coupon bond.1 This kind of security is equivalent to a fixed-rate investment. There are also bonds more complex than those described above, for example:2

• Variable rate bonds, for which the value of each coupon is determined periodically according to a parameter such as an index.

1 A debenture may therefore, in a sense, be considered to constitute a superimposition of zero-coupon debentures.
2 Read for example Colmant B., Delfosse V. and Esch L., Obligations, Les notions financières essentielles, Larcier, 2002. Also: Fabozzi F. J., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000.


• Transition bonds, which authorise repayment before the maturity date.
• Lottery bonds, in which the (public) issuer repays certain bonds each year in a draw.
• Convertible bonds (convertible into equities), etc.

4.1.2 Return on bonds

The return on a bond can of course be calculated through the nominal rate (or coupon rate) rn, which is defined as the ratio between the coupon value and the nominal value:

rn = C / NV

This definition, however, only makes sense if all the coupons have the same value. It can be adapted by replacing the denominator with the price of the bond at a given moment. The nominal rate is of limited interest, as it does not involve the life span of the bond at any point; using it to compare two bonds is therefore rather pointless. For a fixed period of time (such as one year), it is possible to use a rate of return analogous to the return on an equity:

(Pt + Ct − Pt−1) / Pt−1

This concept is, however, very little used in practice.

4.1.2.1 Actuarial rate on issue

The actuarial rate on issue, or more simply the actuarial rate r of a bond, is the rate for which there is equality between the discounted value of the coupons and the repayment value on the one hand, and the issue price on the other hand:

P = Σ_{t=1}^{T} Ct·(1 + r)^(−t) + R·(1 + r)^(−T)

Example

Consider a bond with a period of six years and nominal value 100, issued at 98 and repaid at 105 (issue and reimbursement premiums 2 and 5 respectively) and a nominal rate of 10 %. The equation that defines its actuarial rate is therefore:

98 = 10/(1 + r) + 10/(1 + r)² + 10/(1 + r)³ + 10/(1 + r)⁴ + 10/(1 + r)⁵ + (10 + 105)/(1 + r)⁶

This equation (of the sixth degree in the unknown r) can be solved numerically and gives r = 0.111044, that is, approximately 11.1 %.

The actuarial rate for a zero-coupon bond is of course the rate for a risk-free investment, and is defined by

P = R·(1 + r)^(−T)
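The sixth-degree equation of the example can be solved numerically, for instance by bisection (a sketch of one possible root-finding approach, not the book's own method):

```python
def bond_price(r, coupon=10.0, redemption=105.0, T=6):
    # Discounted value of the coupons plus the redemption value at rate r
    return sum(coupon * (1 + r) ** (-t) for t in range(1, T + 1)) + redemption * (1 + r) ** (-T)

def actuarial_rate(price, lo=0.0, hi=1.0, tol=1e-10):
    # bond_price is decreasing in r, so bisect on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = actuarial_rate(98.0)
assert abs(bond_price(r) - 98.0) < 1e-6
assert abs(r - 0.111044) < 1e-4   # ≈ 11.1 %, as in the text
```

Any standard root-finding routine would do equally well; bisection is used here only because the price is monotone in the rate.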


The rate for a bond issued and reimbursable at par (P = NV = R), with coupons that are all equal (Ct = C for all t), is equal to the nominal rate: r = rn. In fact, for this particular type of bond, we have:

P = Σ_{t=1}^{T} C·(1 + r)^(−t) + P·(1 + r)^(−T)
  = C · [(1 + r)^(−1) − (1 + r)^(−T−1)] / [1 − (1 + r)^(−1)] + P·(1 + r)^(−T)
  = C · [1 − (1 + r)^(−T)] / r + P·(1 + r)^(−T)

From this, it can be deduced that r = C/P = rn.

4.1.2.2 Actuarial return rate at a given moment

The actuarial rate as defined above is calculated when the bond is issued, and is sometimes referred to as the ex ante rate. It is therefore assumed that this rate will remain constant throughout the life of the security (regardless of its maturity date). A major principle of financial mathematics (the principle of equivalence) states that this rate does not depend on the moment at which the various financial movements are 'gathered in'.

Example

If, for the example of the preceding paragraph (a bond with nominal value of 100, issued at 98 and repaid at 105, paying an annual coupon of 10 at the end of each of the security's six years of life) and with an actuarial rate of 11.1 %, one examines the value acquired, for example, on the maturity date, we have:

• for the investment, 98·(1 + r)⁶;
• for the generated financial flows, 10·[(1 + r)⁵ + (1 + r)⁴ + (1 + r)³ + (1 + r)² + (1 + r) + 1] + 105.

The equality of these two quantities is also realised for r = 11.1 %.

If we now stand at a given moment t anywhere between 0 and T, and are aware of the change in the market rate between 0 and t, the actuarial rate of return at the moment3 t, which we will call4 r(t), is the rate for which there is equality between:

1. The value of the investment acquired at t, calculated at this rate r(t).
2. The sum of:
— The value of the coupons falling due, acquired at t, reinvested at the current rates observed between 0 and t.

3 This is sometimes known as the ex post rate.
4 r(0) = r is of course the actuarial rate at issue.


— The discounted value at t of the financial flows generated subsequent to t, calculated using the market rate at the moment t.

Example

Let us take the same example as above. Suppose that we are at the moment in time immediately subsequent to payment of the third coupon (t = 3), and that the market rate has remained at 11.1 % for the first two years and has now changed to 12 %. The above definition gives us the equation that defines the actuarial rate of return for the specific moment t = 3:

98·(1 + r(3))³ = (10·1.111·1.12 + 10·1.12 + 10) + 10/1.12 + 10/1.12² + 115/1.12³
              = 35.33 + 98.76 = 134.09

This gives r(3) = 11.02 %. It will of course be evident that if the rate of interest had remained constant (and equal to 11.1 %) for the first three years, the above calculation would have led to r(3) = 11.1 %, this being consistent with the principle of equivalence. This example clearly shows the phenomenon of bond risk linked to changes in interest rates. This phenomenon will be studied in greater detail in Section 4.2.1.

4.1.2.3 Accrued interest

When a bond is acquired between two coupon payment dates, the purchaser pays not only the value of the bond for that specific moment but also the portion of the coupon to come, calculated in proportion to the period that has passed since payment of the last coupon. The seller, in fact, has the right to partial interest relating to the period from the last coupon payment to the moment of the deal. This principle is called the accrued interest system, and the price effectively paid is the dirty price, as opposed to the clean price, which represents merely the quoted price of the bond at the time of the deal. Let us consider a bond of maturity T and a non-integer moment t + θ (integer t and 0 ≤ θ < 1). [...]

The function R0(s), defined for s > 0, constitutes the term interest-rate structure at moment 0, and the graph of this function is termed the yield curve. The most natural direction of the yield curve is of course upwards; the investor should gain more if he invests over a longer period.
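The principle of equivalence invoked in the first example above can be checked numerically by comparing the value of the investment and the value of the capitalised flows at maturity (a small sketch, using the rate value quoted in the text):

```python
r = 0.111044  # actuarial rate quoted in the text for the 98/105 six-year bond

invested = 98.0 * (1 + r) ** 6
# each coupon of 10 is capitalised from its payment date to maturity
flows = 10.0 * sum((1 + r) ** k for k in range(6)) + 105.0

assert abs(invested - flows) < 0.01  # equal up to the rounding of r
```

At the exact root of the pricing equation the two quantities coincide; the small tolerance only absorbs the rounding of r to six decimal places.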
This, however, is not always the case; in practice, alongside increasing curves, we frequently see flat curves (constant R(s)), inverted curves (decreasing R(s)) and humped curves (see Figure 4.4).

[Figure 4.4: Interest rate curves — four panels of R(s) plotted against s: increasing, flat, inverted and humped.]

13 A detailed presentation of these concepts can be found in Bisière C., La Structure par Terme des Taux d'intérêt, Presses Universitaires de France, 1997.
14 This justifies the title of this present section, which mentions 'interest rates' and not bonds.


4.3.2 Static interest rate structure

The static models examine the structure of interest rates at a fixed moment, which we will term 0, and deal with a zero-coupon bond that gives rise to a repayment of 1, which is not a restriction. In this and the next paragraph, we will detail the model for the discrete case and then generalise it to the continuous case. It is the continuous aspects that will be used in Section 4.5 for the stochastic dynamic models.

4.3.2.1 Discrete model

The price at 0 of a bond with maturity s is termed15 P0(s) and the associated spot rate is represented by R0(s). We therefore have:

P0(s) = (1 + R0(s))^(−s)

The spot interest rate R0(s) in fact combines all the information on interest rates relative to the periods [0; 1], [1; 2], ..., [s − 1; s]. We will give the symbol r(t), and the name term interest rate or short-term interest rate, to the rate relative to the period [t − 1; t]. We therefore have:

(1 + R0(s))^s = (1 + r(1))·(1 + r(2))·...·(1 + r(s))

Reciprocally, it is easy to express the term rates according to the spot rates:

r(1) = R0(1)
1 + r(s) = (1 + R0(s))^s / (1 + R0(s − 1))^(s−1)     s = 2, 3, ...

In the same way, we have:

r(s) = P0(s − 1)/P0(s) − 1     (s > 0)

To sum up, we can easily move from any one of the following three structures to another: the price structure {P0(s) : s = 1, 2, ...}, the spot-rate structure {R0(s) : s = 1, 2, ...} and the term interest structure {r(s) : s = 1, 2, ...}.

Example

Let us consider the spot-rate structure defined for maturity dates 1–6 shown in Table 4.1. This (increasing) structure is shown in Figure 4.5. From this, it is easy to deduce prices and term rates; for example:

P0(5) = 1.075^(−5) = 0.6966
r(5) = 1.075⁵/1.073⁴ − 1 = 0.0830

This generally gives the data shown in Table 4.2.

15 Of course, P0(0) = 1.

Table 4.1 Spot-rate structure

s        1       2       3       4       5       6
R0(s)    6.0 %   6.6 %   7.0 %   7.3 %   7.5 %   7.6 %

[Figure 4.5: Spot-rate structure — R0(s) plotted against maturity dates 1 to 6.]

Table 4.2 Price and rate structures at 0

s    R0(s)    P0(s)     r(s)
0      —      1.0000      —
1    0.060    0.9434    0.0600
2    0.066    0.8800    0.0720
3    0.070    0.8163    0.0780
4    0.073    0.7544    0.0821
5    0.075    0.6966    0.0830
6    0.076    0.6444    0.0810
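Table 4.2 can be regenerated from the spot rates of Table 4.1 alone, using P0(s) = (1 + R0(s))^(−s) and r(s) = P0(s − 1)/P0(s) − 1 (a minimal sketch):

```python
R0 = {1: 0.060, 2: 0.066, 3: 0.070, 4: 0.073, 5: 0.075, 6: 0.076}

P0 = {0: 1.0}
r = {}
for s in range(1, 7):
    P0[s] = (1 + R0[s]) ** (-s)       # price of the zero-coupon maturing at s
    r[s] = P0[s - 1] / P0[s] - 1      # term rate for the period [s−1; s]

assert round(P0[5], 4) == 0.6966 and round(P0[2], 4) == 0.8800
assert round(r[5], 4) == 0.0830 and round(r[2], 4) == 0.0720
```

This illustrates how any one of the three structures (prices, spot rates, term rates) determines the other two.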

4.3.2.2 Continuous model

If the time set is [0; +∞[, we retain the same definitions and notations for the price and spot-rate structures: {P0(s) : s > 0} and {R0(s) : s > 0}. The latter will be an instant rate; after a period s, a total of 1 will become, at this rate, e^(s·R0(s)). We will also write, before taking limits, R0d(s) for the spot rate of the discrete model (even applied to a non-integer period). It is linked to the spot rate of the continuous model by the relation R0(s) = ln(1 + R0d(s)).

With regard to the term rate, we provisionally introduce the notation r(t1, t2) to represent the interest rate relative to the period [t1; t2], and we define the instant term interest rate by:

r(t) = lim_{s→t+} (1/(s − t)) ∫_t^s r(t, u) du

We can readily obtain, as above:

(1 + R0d(s + Δs))^(s+Δs) = (1 + R0d(s))^s · (1 + r(s, s + Δs))^Δs

Thanks to the Taylor formula, this is written:

[1 + Δs·r(s, s + Δs) + O((Δs)²)]·(1 + R0d(s))^s = (1 + R0d(s + Δs))^(s+Δs)

This relation can be rewritten as:

r(s, s + Δs)·(1 + R0d(s))^s + O(Δs) = [(1 + R0d(s + Δs))^(s+Δs) − (1 + R0d(s))^s] / Δs

After taking the limit, this becomes:

r(s) = [(1 + R0d(s))^s]′ / (1 + R0d(s))^s = {ln[(1 + R0d(s))^s]}′ = [s·ln(1 + R0d(s))]′ = [s·R0(s)]′

This relation, which expresses the instant term rate according to the spot rate, can easily be inverted by integrating:

R0(s) = (1/s) ∫_0^s r(u) du

This can also be expressed by saying that the spot rate for the period [0; s] is the average of the instant term rate over the same period. The price is of course linked to the two rates by the relations:

P0(s) = e^(−s·R0(s)) = e^(−∫_0^s r(u) du)

Note

For a flat rate structure (that is, R0(s) independent of s), it is easy to see, by developing the relation [s·R0(s)]′ = r(s), that R0(s) = r(s) = r for every s and that the price structure is given by P0(s) = e^(−rs).

4.3.3 Dynamic interest rate structure

The dynamic models examine the structure of the interest rates at any given moment t. They still deal with zero-coupon bonds, issued at 0 and giving rise to a repayment of 1. They may allow the distortions in the rate curve to be taken into account; in fact, we will be studying the link that exists between price and rate structures at the various observation times.

4.3.3.1 Discrete model

The price at the moment t of the bond issued at 0 and maturing at s is termed16 Pt(s). The term Rt(s) is given to the spot rate relative to the interval ]t; s]. Finally, the term rate relative to the period ]t − 1; t] is termed r(t).
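The relation r(s) = [s·R0(s)]′ and the price formula P0(s) = exp(−∫ r) can be checked numerically for any smooth spot-rate curve; the curve below is our own illustrative choice, not one taken from the text:

```python
import math

def R0(s):
    # illustrative increasing spot-rate curve (an assumption for the sketch)
    return 0.06 + 0.02 * (1 - math.exp(-s))

def r(s, h=1e-6):
    # instant term rate r(s) = d/ds [s·R0(s)], by central difference
    return ((s + h) * R0(s + h) - (s - h) * R0(s - h)) / (2 * h)

def P0(s, n=20000):
    # P0(s) = exp(−∫0^s r(u) du), midpoint rule
    du = s / n
    integral = sum(r((i + 0.5) * du) for i in range(n)) * du
    return math.exp(-integral)

s = 5.0
assert abs(P0(s) - math.exp(-s * R0(s))) < 1e-6  # the two price formulas agree
```

Since ∫0^s r(u) du = s·R0(s) exactly, the two expressions for the price must coincide up to discretisation error.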

16 It is of course supposed that 0 < t < s.


Following reasoning similar in every way to that used for the static models, we readily obtain the relations

Pt(s) = (1 + Rt(s))^(−(s−t))
(1 + Rt(s))^(s−t) = (1 + r(t + 1))·(1 + r(t + 2))·...·(1 + r(s))

These invert readily to

r(t + 1) = Rt(t + 1)
1 + r(s) = (1 + Rt(s))^(s−t) / (1 + Rt(s − 1))^(s−1−t)     s = t + 2, t + 3, ...

We also have, between the structure of the prices and that of the interest rates:

r(s) = Pt(s − 1)/Pt(s) − 1     (s > t)

The link between the price structures at different observation times is expressed by the following relation:

Pt(s) = [(1 + r(t + 1))·(1 + r(t + 2))·...·(1 + r(s))]^(−1)
      = (1 + r(t)) · [(1 + r(t))·(1 + r(t + 1))·(1 + r(t + 2))·...·(1 + r(s))]^(−1)
      = (1 + Rt−1(s))^(−(s−t+1)) / (1 + Rt−1(t))^(−1)
      = Pt−1(s)/Pt−1(t)

This result can easily be generalised: whatever u may be, placed between t and s (t ≤ u ≤ s), we have:

Pt(s) = Pu(s)/Pu(t)

From this relation it is possible to deduce a link, which, however, has a rather ungainly expression, between the spot-rate structures at the various times.

Example

Let us take once again the spot interest-rate structure used in the previous paragraph: 6 %, 6.6 %, 7 %, 7.3 %, 7.5 % and 7.6 % for the respective maturity dates at 1, 2, 3, 4, 5 and 6 years. Let us see what happens to the structure after two years. We easily find:

P2(5) = P0(5)/P0(2) = 0.69656/0.88001 = 0.7915
R2(5) = P2(5)^(−1/(5−2)) − 1 = 0.7915^(−1/3) − 1 = 0.0810

and more generally as shown in Table 4.3.

Table 4.3 Price and rate structures at 2

s    P2(s)     R2(s)
2    1.0000      —
3    0.9276    0.0780
4    0.8573    0.0800
5    0.7915    0.0810
6    0.7322    0.0810

Note that we have:

r(5) = P0(4)/P0(5) − 1 = P2(4)/P2(5) − 1 = 0.0830
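Table 4.3 can be reproduced directly from the static structure via Pt(s) = P0(s)/P0(t) (a minimal sketch, starting again from the spot rates of Table 4.1):

```python
spot = {1: 0.060, 2: 0.066, 3: 0.070, 4: 0.073, 5: 0.075, 6: 0.076}
P0 = {s: (1 + R) ** (-s) for s, R in spot.items()}

P2 = {s: P0[s] / P0[2] for s in range(2, 7)}                 # prices observed at t = 2
R2 = {s: P2[s] ** (-1 / (s - 2)) - 1 for s in range(3, 7)}   # spot rates at t = 2

assert round(P2[5], 4) == 0.7915
assert round(R2[5], 4) == 0.0810
```

The same two lines reproduce every entry of the table, which shows how completely the structure at time 2 is determined by the structure at time 0 (under the deterministic hypothesis).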

4.3.3.2 Continuous model

The prices Pt(s) and the spot rates Rt(s) are defined as for the static models, but with observation at moment t instead of 0. The instant term rates r(t) are defined in the same way. It can easily be seen that the relations linking the two are, for all t:

r(s) = [(s − t)·Rt(s)]′
Rt(s) = (1/(s − t)) ∫_t^s r(u) du

Meanwhile, the relations that link rates to prices are given by:

Pt(s) = e^(−(s−t)·Rt(s)) = e^(−∫_t^s r(u) du)

4.3.4 Deterministic model and stochastic model

The relations mentioned above have been established in a deterministic context. Among other things, the instant short rate and the term rate have been assimilated. More generally (stochastic model), the following distinction should be made:

1. The instant term rate, defined by: r(t) = lim_{s→t+} Rt(s).
2. The instant forward rate, defined as follows: if ft(s1, s2) represents the rate of interest seen from time t for a bond issued at s1 and maturing at s2, the forward rate (at s seen from t, with t < s) is: ft(s) = lim_{u→s+} ft(s, u).

In a general model, it is this forward rate that must be used to find the price and spot-rate structures:

Pt(s) = e^(−∫_t^s ft(u) du)
Rt(s) = (1/(s − t)) ∫_t^s ft(u) du


It can easily be seen that these two rates (instant term and forward) are linked by the relation r(t) = ft(t). It can be demonstrated that in the deterministic case, ft(s) is independent of t, and the two rates can therefore be identified: ft(s) = r(s). It is therefore only in this context that we have:

Pt(s) = e^(−∫_t^s r(u) du)
Rt(s) = (1/(s − t)) ∫_t^s r(u) du
Pt(s) = Pu(s)/Pu(t)

4.4 BOND PORTFOLIO MANAGEMENT STRATEGIES

4.4.1 Passive strategy: immunisation

The aim of passive management is to neutralise the portfolio risk caused by fluctuations in interest rates.

4.4.1.1 Duration and convexity of a portfolio

Let us consider a bond portfolio consisting at moment 0 of N securities (j = 1, ..., N), each characterised by:

• a maturity (residual life) Tj;
• coupons yet to come Cj,t (t = 1, ..., Tj);
• a repayment value Rj;
• an actuarial rate on issue rj;
• a price Pj.

The highest of the maturity values Tj will be termed T, and Fj,t the financial flow generated by the security j at the moment t:

Fj,t = Cj,t          if t < Tj
     = Cj,Tj + Rj    if t = Tj
     = 0             if t > Tj

The duration of the jth security is given by

Dj = [Σ_{t=1}^{Tj} t·Cj,t·(1 + rj)^(−t) + Tj·Rj·(1 + rj)^(−Tj)] / [Σ_{t=1}^{Tj} Cj,t·(1 + rj)^(−t) + Rj·(1 + rj)^(−Tj)]
   = [Σ_{t=1}^{Tj} t·Fj,t·(1 + rj)^(−t)] / Pj


Finally, let us suppose that the jth security is present within the portfolio in the number nj. The discounted financial flow generated by the portfolio at moment t totals:

Σ_{j=1}^{N} nj·Fj,t·(1 + rj)^(−t)

Its price totals Σ_{j=1}^{N} nj·Pj. The duration of the portfolio can therefore be written as:

DP = Σ_{t=1}^{T} t · [Σ_{j=1}^{N} nj·Fj,t·(1 + rj)^(−t)] / [Σ_{k=1}^{N} nk·Pk]
   = Σ_{j=1}^{N} [nj / Σ_{k=1}^{N} nk·Pk] · Σ_{t=1}^{Tj} t·Fj,t·(1 + rj)^(−t)
   = Σ_{j=1}^{N} [nj·Pj / Σ_{k=1}^{N} nk·Pk] · [Σ_{t=1}^{Tj} t·Fj,t·(1 + rj)^(−t) / Pj]
   = Σ_{j=1}^{N} Xj·Dj

where Xj = nj·Pj / Σ_{k=1}^{N} nk·Pk represents the proportion of the jth security within the portfolio, expressed in terms of capitalisation. The same reasoning will reveal the convexity of the portfolio:

CP = Σ_{j=1}^{N} Xj·Cj
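The identity DP = Σ Xj·Dj can be illustrated on a small hypothetical portfolio (the bonds and holdings below are invented for the sketch; they do not come from the text):

```python
def price_and_duration(coupon, redemption, T, r):
    # flows: coupon each year, coupon + redemption at maturity T
    flows = [(t, coupon) for t in range(1, T)] + [(T, coupon + redemption)]
    price = sum(F * (1 + r) ** (-t) for t, F in flows)
    duration = sum(t * F * (1 + r) ** (-t) for t, F in flows) / price
    return price, duration

# (coupon, redemption, maturity, actuarial rate, number held) — illustrative values
bonds = [(5.0, 100.0, 4, 0.06, 10), (8.0, 100.0, 7, 0.07, 5)]

values, durations = [], []
for C, R, T, r, n in bonds:
    P, D = price_and_duration(C, R, T, r)
    values.append(n * P)
    durations.append(D)

total = sum(values)
D_P = sum(v / total * D for v, D in zip(values, durations))  # D_P = Σ X_j·D_j
assert min(durations) < D_P < max(durations)
```

As a capitalisation-weighted average, the portfolio duration necessarily lies between the smallest and largest individual durations.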

4.4.1.2 Immunising a portfolio

A portfolio is said to be immunised at horizon H if its value at that date is at least the value that it would have had if interest rates had remained constant during the period [0; H]. By applying the result arrived at in Section 4.2.2 to each bond in the portfolio, we obtain the same result: a bond portfolio is immunised at a horizon that corresponds to its duration.


Of course, whenever the interest rate changes, the residual duration varies suddenly. A careful bond portfolio manager wishing to immunise his portfolio at a horizon H that he has fixed must therefore:

• Put together a portfolio with duration H.
• After each (significant) interest rate change, alter the composition of the portfolio by making sales and purchases (that is, alter the proportions Xj) so that the residual duration can be 'pursued'.

Of course, these alterations to the portfolio composition will incur transaction charges, which should be taken into consideration and balanced against the benefits supplied by the immunising strategy.

Note

It was stated in Section 4.2.3 that of two bonds presenting the same return (actuarial rate) and duration, the one with the higher convexity will be of greater interest. This result remains valid for a portfolio, and the manager must therefore take it into consideration whenever revising his portfolio.

4.4.2 Active strategy

The aim of active management is to obtain a return higher than that produced by immunisation, that is, higher than the actuarial return rate on issue. In the case of increasing rates (the commonest case), when the rate curve remains unchanged over time, the technique is to purchase securities with a longer maturity than the investment horizon and to sell them before their maturity date.17

Example

Let us take once again the rate structure shown in the previous section (Table 4.4). Suppose that the investor fixes a two-year horizon. If he simply purchases a security with maturity in two years, he will obtain an annual return of 6.6 %. Indeed, the return over the two years can be calculated as

(1 − 0.8800)/0.8800 = 0.1364   and   √1.1364 = 1.066

Table 4.4 Price and rate structures

s    R0(s)    P0(s)     r(s)
0      —      1.0000       —
1    0.060    0.9434    0.06000
2    0.066    0.8800    0.07203
3    0.070    0.8163    0.07805
4    0.073    0.7544    0.08205
5    0.075    0.6966    0.08304
6    0.076    0.6444    0.08101

17 If the rate curve is ﬂat and remains ﬂat, the strategy presented will produce the same return as the purchase of a security with a maturity equivalent to the investment horizon.


If he purchases a security with maturity in five years (at a price of 0.6966) and sells it after two years (at the three-year security price, if the rate curve remains unchanged, that is, 0.8163), he will realise a total return of

0.8163/0.6966 = 1.1719

This gives √1.1719 = 1.0825, that is, an annual return of 8.25 %, which is of considerably greater interest than the return (6.6 %) obtained with the two-year security. Note that we have an interpretation of the term rates here, as the total return for the period [3; 5], effectively used, is given by (1 + r(4))·(1 + r(5)) = 1.0821·1.0830 = 1.1719.

The return obtained using this technique assumes that the rate curve remains unchanged over time. If, however, the curve, and more specifically the spot rate used to calculate the resale price, fluctuates, the investor will be exposed to the interest-rate fluctuation risk. This fluctuation will be favourable (unfavourable) to him if the rate in question falls (rises). The investor will therefore have to choose between a safe return and a higher but potentially riskier return.

Example

With the same information, if after the purchase of the security with maturity in five years the spot rate for the three-year security shifts from 7 % to 8 %, the price of that security will fall from 0.8163 to 0.7938 and the return over the two years will be

0.7938/0.6966 = 1.1396

We therefore have √1.1396 = 1.0675, which corresponds to an annual return of 6.75 %.
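The two scenarios above can be reproduced directly from the spot rates (a minimal sketch; the prices are recomputed rather than copied, to avoid rounding in the intermediate division):

```python
buy = 1.075 ** -5            # P0(5): five-year zero-coupon bought at issue
sell_unchanged = 1.07 ** -3  # P0(3): three-year price if the curve is unchanged
sell_shifted = 1.08 ** -3    # three-year price after the spot rate moves to 8 %

annual_unchanged = (sell_unchanged / buy) ** 0.5 - 1  # two-year holding period
annual_shifted = (sell_shifted / buy) ** 0.5 - 1

assert round(annual_unchanged, 4) == 0.0825  # riding the curve: 8.25 % per year
assert round(annual_shifted, 4) == 0.0675    # after the adverse rate move: 6.75 %
```

The spread between the two outcomes (8.25 % against 6.75 %) is precisely the interest-rate fluctuation risk the investor accepts in exchange for the chance of beating the 6.6 % two-year return.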

4.5 STOCHASTIC BOND DYNAMIC MODELS

The models presented here are actually generalisations of the deterministic interest-rate structures. The aim is to produce relations that govern changes in the price Pt(s) and the spot rates Rt(s). There are two main categories of these models: distortion models and arbitrage models.

The distortion models examine the changes in the price Pt(s) when the interest-rate structure is subject to distortion. A simple model is that of Ho and Lee,18 in which the distortion of the rate curve shows in two possible movements in each period; it is therefore a binomial discrete type of model. A more developed model is the Heath, Jarrow and Morton model,19 which has a discrete and a continuous version and in which the distortions to the rate curve are more complex.

The arbitrage models involve the compilation, and where possible the resolution, of an equation with partial derivatives for the price Pt(s, v1, v2, ...) considered as a function of t, v1, v2, ... (s fixed), using:

18 Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of Finance, Vol. 41, No. 5, 1986, pp. 1011–29.
19 Heath D., Jarrow R. and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New Methodology, Cornell University, 1987. Heath D., Jarrow R. and Morton A., Bond pricing and the term structure of interest rates: discrete time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40.


• the absence of arbitrage opportunity;
• hypotheses relating to the stochastic processes that govern the evolution of the state variables v1, v2, etc.

The commonest of the models with just one state variable are the Merton model,20 the Vasicek model21 and the Cox, Ingersoll and Ross model;22 all of these use the instant term rate r(t) as the state variable. The models with two state variables include:

• The Brennan and Schwartz model,23 which uses the instant term rate r and the long rate l as variables.
• The Nelson and Schaefer model24 and the Schaefer and Schwartz model,25 for which the state variables are the long rate l and the spread s = l − r.
• The Richard model,26 which uses the instant term rate and the rate of inflation.
• The Ramaswamy and Sundaresan model,27 which takes the instant market price of the risk linked to the risk of default alongside the instant term rate.

In this section we will be dealing with only the simplest of the arbitrage models: after a general introduction to the principle of these models (Section 4.5.1), we will examine in succession the Vasicek model (Section 4.5.2) and the Cox, Ingersoll and Ross model28 (Section 4.5.3). Finally, in Section 4.5.4, we will deal with the concept of 'stochastic duration'.

4.5.1 Arbitrage models with one state variable

4.5.1.1 General principle

It is once again stated (see Section 4.3) that the stochastic processes of interest to us here are:

• The price Pt(s) at t of a zero-coupon bond (unit repayment value) maturing at the moment s (with t < s).
• The spot rate Rt(s), linked to the price by the relation

Pt(s) = e^(−(s−t)·Rt(s))

20 Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4, No. 1, 1973, pp. 141–83.
21 Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics, Vol. 5, No. 2, 1977, pp. 177–88.
22 Cox J., Ingersoll J. and Ross S., A theory of the term structure of interest rates, Econometrica, Vol. 53, No. 2, 1985, pp. 385–406.
23 Brennan M. and Schwartz E., A continuous time approach to the pricing of bonds, Journal of Banking and Finance, Vol. 3, No. 2, 1979, pp. 133–55.
24 Nelson J. and Schaefer S., The dynamics of the term structure and alternative portfolio immunization strategies, in Bierwag D., Kaufman G. and Toevs A., Innovations in Bond Portfolio Management: Duration Analysis and Immunization, JAI Press, 1983.
25 Schaefer S. and Schwartz E., A two-factor model of the term structure: an approximate analytical solution, Journal of Financial and Quantitative Analysis, Vol. 19, No. 4, 1984, pp. 413–24.
26 Richard S., An arbitrage model of the term structure of interest rates, Journal of Financial Economics, Vol. 6, No. 1, 1978, pp. 33–57.
27 Ramaswamy K. and Sundaresan M., The valuation of floating-rate instruments: theory and evidence, Journal of Financial Economics, Vol. 17, No. 2, 1986, pp. 251–72.
28 The attached CD-ROM contains a series of Excel files that show simulations of these stochastic processes and the rate curves for the various models, combined together in the 'Ch4' file.


• The instant term rate, which we will refer to hereafter as rt 29 (or r if there is no risk of confusion), and which is the instant rate at moment t, written as

rt = lim(s→t+) Rt(s) = lim(s→t+) (1/(s − t)) ∫t^s ft(u) du

It is this instant term rate that will be the state variable. The price and spot rate will be written as Pt(s, r) and Rt(s, r) and will be considered as functions of the variables t and r alone, the maturity date s being fixed. In addition, it is assumed that these expressions are random via the intermediary of rt only. It is assumed here that the changes in the state variable rt are governed by the general stochastic differential equation30

drt = a(t, rt) dt + b(t, rt) dwt

where the coefficients a and b respectively represent the average instant return of the instant term rate and the volatility of that rate, and wt is the standard Brownian motion. Applying the Itô formula to the function Pt(s, rt) leads to the following, with simplified notations:

dPt(s, rt) = (Pt + Pr·a + (1/2)Prr·b²) · dt + Pr·b · dwt = Pt(s, rt) · µt(s, rt) · dt − Pt(s, rt) · σt(s, rt) · dwt

Here, we have:

µt = (Pt + Pr·a + (1/2)Prr·b²) / P
σt = −Pr·b / P

(Note that σt > 0 as Pr < 0.) The expression µt(s, rt) is generally termed the average instant return of the bond. Let us now consider two fixed maturity dates s1 and s2 (> t) and apply an arbitrage reasoning by putting together, at the moment t, a portfolio consisting of:

• the issue of a bond with maturity date s1;
• the purchase of X bonds with maturity date s2.

X is chosen so that the portfolio does not contain any random component; the term involving dwt therefore has to disappear. The value of this portfolio at moment t is given by Vt = −Pt(s1) + X·Pt(s2), and the hypothesis of absence of arbitrage opportunity allows us to express that the average return on this portfolio over the interval [t; t + dt] is given by the instant term rate rt:

dVt / Vt = rt · dt + 0 · dwt

29 Instead of r(t) as in Section 4.3, for ease of notation. 30 See Appendix 2.


By differentiating the value of the portfolio, we have:

dVt = −Pt(s1)(µt(s1) dt − σt(s1) dwt) + X · Pt(s2)(µt(s2) dt − σt(s2) dwt)
    = [−Pt(s1)µt(s1) + X·Pt(s2)µt(s2)] · dt + [Pt(s1)σt(s1) − X·Pt(s2)σt(s2)] · dwt

The arbitrage logic will therefore lead us to:

[−Pt(s1)µt(s1) + X·Pt(s2)µt(s2)] / [−Pt(s1) + X·Pt(s2)] = rt
[Pt(s1)σt(s1) − X·Pt(s2)σt(s2)] / [−Pt(s1) + X·Pt(s2)] = 0

In other words:

X·Pt(s2) · (µt(s2) − rt) = Pt(s1) · (µt(s1) − rt)
X·Pt(s2) · σt(s2) = Pt(s1) · σt(s1)

We can eliminate X, for example by dividing the two equations member by member, which gives:

(µt(s1) − rt) / σt(s1) = (µt(s2) − rt) / σt(s2)

This shows that the expression

λt(rt) = (µt(s) − rt) / σt(s)

is independent of s; this expression is known as the market price of the risk. By replacing µt and σt with their values in the preceding relation, we arrive at

Pt + (a + λb)Pr + (b²/2)Prr − rP = 0

What we are looking at here is a second-order partial differential equation which, together with the condition Ps(s, rt) = 1 (unit repayment at maturity), defines the price process. This equation must be solved for each specification of a(t, rt), b(t, rt) and λt(rt).

4.5.1.2 The Merton model31

Because of its historical interest,32 we begin with the simplest model, the Merton model. This model assumes that the instant term rate follows a random walk:

drt = α · dt + σ · dwt

with α and σ constant and the market price of risk zero (λ = 0). The partial differential equation for the price takes the form:

Pt + αPr + (σ²/2)Prr − rP = 0

31 Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4, No. 1, 1973, pp. 141–83. 32 This is in fact the ﬁrst model based on representation of changes in the spot rate using a stochastic differential equation.


It is easy to verify that the solution to this equation (with the initial condition) is given by

Pt(s, rt) = exp[ −(s − t)rt − (α/2)(s − t)² + (σ²/6)(s − t)³ ]

The average instant return rate is given by

µt(s, rt) = [Pt + αPr + (σ²/2)Prr] / P = rt · P / P = rt

which shows that in this case the average return is independent of the maturity date. The spot rate is:

Rt(s, rt) = −(1/(s − t)) ln Pt(s, rt) = rt + (α/2)(s − t) − (σ²/6)(s − t)²
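As a quick numerical check of this behaviour, the closed-form price and spot rate can be evaluated directly. The parameter values below are illustrative assumptions, not taken from the book; the sketch confirms that the Merton spot rate starts close to rt and inevitably turns negative for distant maturities.

```python
import math

def merton_price(r, tau, alpha, sigma):
    """P_t(s) = exp(-(s-t)r - (alpha/2)(s-t)^2 + (sigma^2/6)(s-t)^3), tau = s - t."""
    return math.exp(-tau * r - 0.5 * alpha * tau ** 2 + (sigma ** 2 / 6.0) * tau ** 3)

def merton_spot_rate(r, tau, alpha, sigma):
    """R_t(s) = -(1/(s-t)) ln P_t(s) = r + (alpha/2)(s-t) - (sigma^2/6)(s-t)^2."""
    return -math.log(merton_price(r, tau, alpha, sigma)) / tau

# Illustrative parameters (not from the book): r = 4 %, alpha = 0.01, sigma = 0.02.
r0, alpha, sigma = 0.04, 0.01, 0.02
short_rate = merton_spot_rate(r0, 0.01, alpha, sigma)   # close to r0
long_rate = merton_spot_rate(r0, 200.0, alpha, sigma)   # negative: the model's flaw
```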

This expression shows that the spot rate is close to the instant term rate in the short term, which is logical, but also (because of the third term) that it will inevitably become negative for distant maturity dates, which is much less logical.

Note
If the Merton model is generalised so that the market price of risk λ is a strictly positive constant, we arrive at an average return µt that grows with the maturity date, but the drawback affecting the spot rate Rt remains. The Merton model, which is unrealistic, has now been replaced by models that are closer to reality; these are covered in the next two sections.

4.5.2 The Vasicek model33

In this model, the state variable rt develops according to an Ornstein–Uhlenbeck process

drt = δ(θ − rt) · dt + σ · dwt

in which the parameters δ, θ and σ are strictly positive constants and the rate risk unit premium is also a strictly positive constant, λt(rt) = λ > 0. The essential property of the Ornstein–Uhlenbeck process is that the variable rt is 'recalled' back towards θ if it moves too far away, δ representing the 'force of recall'.

Example
Figure 4.6 shows a simulated trajectory (evolution over time) for such a process over 1000 very short time periods, with the values δ = 100, θ = 0.1 and σ = 0.8 and a start value r0 of 10 %.

33 Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics, Vol. 5, No. 2, 1977, pp. 177–88.


Figure 4.6 Ornstein–Uhlenbeck process
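A trajectory such as the one in Figure 4.6 can be reproduced with a simple Euler–Maruyama discretisation of the Ornstein–Uhlenbeck equation. A minimal sketch; the step length dt = 0.001 is an assumption, as the book says only 'very short time periods':

```python
import math
import random

def simulate_ou(r0, delta, theta, sigma, dt, n_steps, seed=42):
    """Euler-Maruyama scheme for dr_t = delta*(theta - r_t) dt + sigma dw_t."""
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n_steps):
        r += delta * (theta - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path

# Parameters of Figure 4.6: delta = 100, theta = 0.1, sigma = 0.8, r0 = 10 %.
path = simulate_ou(0.10, 100.0, 0.10, 0.8, 0.001, 1000)
mean_tail = sum(path[200:]) / len(path[200:])  # hovers around theta = 0.1
```

The mean-reversion is visible in the tail average staying near θ even though individual values wander (and can go negative, which is precisely the drawback discussed below).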

The partial derivatives equation for the price is shown here:

Pt + (δ(θ − r) + λσ)Pr + (σ²/2)Prr − rP = 0

The solution to this equation and its initial condition is given by

Pt(s, rt) = exp[ −k(s − t) + ((k − rt)/δ)(1 − e^(−δ(s−t))) − (σ²/(4δ³))(1 − e^(−δ(s−t)))² ]

where we have:

k = θ + λσ/δ − σ²/(2δ²)

The average instant return rate is given by:

µt(s, rt) = [Pt + δ(θ − rt)Pr + (σ²/2)Prr] / P = [rt · P − λσ · Pr] / P = rt + (λσ/δ)(1 − e^(−δ(s−t)))

This average return increases with the maturity date, and presents a horizontal asymptote in the long term (Figure 4.7). The spot rate is given by:

Rt(s, rt) = −(1/(s − t)) ln Pt(s, rt) = k − ((k − rt)/(δ(s − t)))(1 − e^(−δ(s−t))) + (σ²/(4δ³(s − t)))(1 − e^(−δ(s−t)))²


Figure 4.7 The Vasicek model: average instant return

On one hand, this expression shows that the spot rate stabilises for distant maturity dates, regardless of the initial value of the spot rate:

lim(s−t)→+∞ Rt(s, rt) = k

On the other hand, depending on the current value of the spot rate in relation to the parameters, we can use this model to represent various movements of the yield curve. Depending on whether rt belongs to

]0; k − σ²/(4δ²)],   ]k − σ²/(4δ²); k + σ²/(2δ²)[   or   [k + σ²/(2δ²); +∞[

we will obtain a rate curve that is increasing, humped or decreasing.

Example
Figure 4.8 shows spot-rate curves produced using the Vasicek model for the following parameter values: δ = 0.2, θ = 0.08, σ = 0.05 and λ = 0.02. The three curves correspond, from bottom to top, to r0 = 2 %, r0 = 6 % and r0 = 10 %.

The Vasicek model, however, has two major drawbacks. On one hand, the Ornstein–Uhlenbeck process drt = δ(θ − rt) · dt + σ · dwt on which it is constructed sometimes allows the instant term rate to assume negative values, because of the second term. On the other hand, the spot-rate function it generates, Rt(s), may in some cases also assume negative values.

Figure 4.8 The Vasicek model: yield curves
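The three curve shapes can be checked directly from the closed-form spot rate. A sketch using the parameter values of Figure 4.8; with these values k = 0.05375, so the starting rates 2 %, 6 % and 10 % fall in the increasing, humped and decreasing regions respectively:

```python
import math

def vasicek_k(delta, theta, sigma, lam):
    """k = theta + lam*sigma/delta - sigma^2/(2 delta^2)."""
    return theta + lam * sigma / delta - sigma ** 2 / (2 * delta ** 2)

def vasicek_spot(r0, tau, delta, theta, sigma, lam):
    """R_t(s) = k - ((k - r)/(delta*tau))(1 - e^(-delta*tau))
               + (sigma^2/(4 delta^3 tau))(1 - e^(-delta*tau))^2."""
    k = vasicek_k(delta, theta, sigma, lam)
    e = 1.0 - math.exp(-delta * tau)
    return k - (k - r0) * e / (delta * tau) + sigma ** 2 * e ** 2 / (4 * delta ** 3 * tau)

delta, theta, sigma, lam = 0.2, 0.08, 0.05, 0.02
taus = [n / 2 for n in range(1, 121)]   # maturities from 0.5 to 60 years
curve_low = [vasicek_spot(0.02, t, delta, theta, sigma, lam) for t in taus]
curve_mid = [vasicek_spot(0.06, t, delta, theta, sigma, lam) for t in taus]
curve_high = [vasicek_spot(0.10, t, delta, theta, sigma, lam) for t in taus]
```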


4.5.3 The Cox, Ingersoll and Ross model34

This model is part of a group also known as the equilibrium models, as they are based on a macroeconomic type of reasoning, resting in turn on the hypothesis that the consumer shows behaviour consistently aimed at maximising expected utility. These considerations, which we will not detail, lead (as do the other arbitrage models) to a specific definition of the stochastic process that governs the evolution of the instant term rate rt, as well as of the market price of risk λt(rt). If within the Ornstein–Uhlenbeck process the second term is modified to produce

drt = δ(θ − rt) · dt + σ·rt^α · dwt

with α > 0, we will avoid the drawback mentioned earlier: the instant term rate can no longer become negative. In fact, as soon as it reaches zero, only the first term subsists and the variation in rates must therefore necessarily be upwards; the horizontal axis then operates as a 'repulsing barrier'. Using the macroeconomic reasoning on which the Cox, Ingersoll and Ross model is based, we have a situation where α = 1/2 and the stochastic process is known as the square root process:

drt = δ(θ − rt) · dt + σ·√rt · dwt

The same reasoning leads to a rate risk unit premium given by λt(rt) = (γ/σ)·√rt, where γ is a strictly positive constant; the market price of risk therefore increases together with the instant term rate in this case.

Example
Figure 4.9 represents a square root process with the parameters δ = 100, θ = 0.1 and σ = 0.8.

The partial derivative equation for the price is given by

Pt + (δ(θ − r) + γr)Pr + (σ²r/2)Prr − rP = 0


Figure 4.9 Square root process

34 Cox J., Ingersoll J. and Ross J., A theory of the term structure of interest rates, Econometrica, Vol. 53, No. 2, 1985, pp. 385–406.
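A trajectory such as the one in Figure 4.9 can be simulated in the same way as the Ornstein–Uhlenbeck process. The 'full truncation' Euler scheme used below is an assumption (the book does not say how its figure was generated): clipping r at zero inside the coefficients keeps the square root defined, and near zero only the drift δθ remains, which pushes the rate back up, mimicking the 'repulsing barrier'. The step length dt = 0.001 is also assumed.

```python
import math
import random

def simulate_square_root(r0, delta, theta, sigma, dt, n_steps, seed=7):
    """Full-truncation Euler scheme for dr = delta*(theta - r) dt + sigma*sqrt(r) dw."""
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n_steps):
        r_pos = max(r, 0.0)  # clip negative values before taking the square root
        r = r + delta * (theta - r_pos) * dt + sigma * math.sqrt(r_pos * dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path

# Parameters of Figure 4.9: delta = 100, theta = 0.1, sigma = 0.8.
path = simulate_square_root(0.10, 100.0, 0.10, 0.8, 0.001, 1000)
```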


The solution to this equation and its initial condition is given by

Pt(s, rt) = xt(s) · e^(−yt(s)·r)

where we have:

xt(s) = [ 2k·e^((δ−γ+k)(s−t)/2) / zt(s) ]^(2δθ/σ²)
yt(s) = 2(e^(k(s−t)) − 1) / zt(s)
zt(s) = 2k + (δ − γ + k)(e^(k(s−t)) − 1)
k = √((δ − γ)² + 2σ²)

The average instant rate return is given by:

µt(s, rt) = [Pt + δ(θ − rt)Pr + (σ²rt/2)Prr] / P = [rt · P − γrt · Pr] / P = rt(1 + γ·yt(s))

In this case, the average rate of return is proportional to the instant term rate. Finally, the spot rate is given by:

Rt(s, rt) = −(1/(s − t)) ln Pt(s, rt) = −(1/(s − t)) (ln xt(s) − rt·yt(s))

Example
Figure 4.10 shows the spot-rate curves produced using the Cox, Ingersoll and Ross model for the following parameter values: δ = 0.2, θ = 0.08, σ = 0.05 and γ = 0.02. The three curves, from bottom to top, correspond to r0 = 2 %, r0 = 6 % and r0 = 10 %.

Figure 4.10 The Cox, Ingersoll and Ross model: rate curves
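The closed-form spot rate and its long-maturity limit can be checked numerically. A sketch with the parameter values of Figure 4.10; the logarithm of x is computed directly so that long maturities do not overflow:

```python
import math

def cir_spot(r0, tau, delta, theta, sigma, gamma):
    """R_t(s) = -(1/tau)(ln x_t(s) - r*y_t(s)) with the x, y, z, k given above."""
    k = math.sqrt((delta - gamma) ** 2 + 2 * sigma ** 2)
    expk = math.exp(k * tau)
    z = 2 * k + (delta - gamma + k) * (expk - 1.0)
    y = 2.0 * (expk - 1.0) / z
    # ln x evaluated directly, avoiding the huge intermediate value of x itself
    ln_x = (2 * delta * theta / sigma ** 2) * (math.log(2 * k) + 0.5 * (delta - gamma + k) * tau - math.log(z))
    return -(ln_x - r0 * y) / tau

delta, theta, sigma, gamma = 0.2, 0.08, 0.05, 0.02
k = math.sqrt((delta - gamma) ** 2 + 2 * sigma ** 2)
long_run = 2 * delta * theta / (delta - gamma + k)   # the limit of the spot rate
r_30y = cir_spot(0.02, 30.0, delta, theta, sigma, gamma)
```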


Finally, we should point out that in contrast to the Vasicek model, the Cox, Ingersoll and Ross model can never produce a negative spot rate. In addition, as with the Vasicek model, the spot rate stabilises for distant maturity dates, regardless of the initial value of the spot rate:

lim(s−t)→+∞ Rt(s, rt) = 2δθ / (δ − γ + k)

4.5.4 Stochastic duration

Finally, to end this section dedicated to random models, we turn to a generalisation of the concept of duration. Duration and convexity are the standard techniques used to assess the sensitivity of the price of a rate product to an alteration in its rate; convexity allows the variation in value to be estimated for more significant rate variations. These concepts are used not only in bond portfolio management but also in asset and liability management, in the context of immunisation of interest margins. Part of the balance-sheet margin is produced by a spread between the interest paid on liabilities (long-term deposits) and the interest received on assets (the bank's own fixed-income portfolio). This margin is immunised against variations in rate if duration and convexity are identical on both sides of the balance sheet. This identity of duration and convexity, also known as mutual support, does not necessarily mean that cash flows are identical in assets and liabilities; in that case, a distortion of the rate curve could lead to non-identical alterations in asset and liability values. The transition from a deterministic rate curve to a stochastic rate model provides the solution to this problem: the random evolution of rates allows a stochastic duration to be calculated. There are several stochastic rate models (see Sections 4.5.1–4.5.3), but the one most frequently used in the financial literature is the classical Vasicek model.
4.5.4.1 Random evolution of rates

The classical Vasicek model is based on changes in the instant term rate governed by an Ornstein–Uhlenbeck process: drt = δ(θ − rt) · dt + σ · dwt. The forward long rate r∞fw is of course a function of the parameters δ, θ and σ of the model. Variations in a bond's price depend on the values taken by the random variable rt and on alterations to the model's parameters. The natural way of approaching stochastic duration is to fit the parameters econometrically to observed rate curves.

4.5.4.2 Principle of mutual support

The total variation in value at the initial moment t is obtained by a first-order Taylor expansion:

dVt(s, rt, r∞fw, σ) = V′rt drt + V′r∞fw dr∞fw + V′σ dσ

The principle of mutual support between assets and liabilities requires two restrictions to be respected: first, equality of values between assets and liabilities:

VA,t(s, rt, r∞fw, σ) = VL,t(s, rt, r∞fw, σ)


and second, equality of total variations: regardless of what the increases drt, dr∞fw and dσ may be, we have

dVA,t(s, rt, r∞fw, σ) = dVL,t(s, rt, r∞fw, σ)

This second condition therefore requires that:

V′A,rt(s, rt, r∞fw, σ) = V′L,rt(s, rt, r∞fw, σ)
V′A,r∞fw(s, rt, r∞fw, σ) = V′L,r∞fw(s, rt, r∞fw, σ)
V′A,σ(s, rt, r∞fw, σ) = V′L,σ(s, rt, r∞fw, σ)

4.5.4.3 Extension of the concept of duration

Generally speaking, it is possible to define the duration D that is a function of the variation in long and short rates:

Dt(s, rt, r∞fw) = (1/2V)(V′rt + V′r∞fw)

This expression allows us to find the standard duration when the rate curve is deterministic and tends towards a constant curve, with σ = 0 and θ = rt, the initial instant term rate for the period t:

Dt(s, rt, r∞fw) = (1/V) · V′rt

More generally, the duration is sensitive to the spread S between the short rate and the long rate. Sensitivity to spread allows the variation in value to be calculated for a spread variation:

St(s, rt, r∞fw) = (1/2V)(−V′rt + V′r∞fw)

In this case, if σ is stable and considered to be a constant, mutual support will correspond to the equality of stochastic duration and of sensitivity to spread for assets and liabilities. The equality dVA,t(s, rt, r∞fw) = dVL,t(s, rt, r∞fw), valid whatever the increases drt and dr∞fw, is equivalent to:

DA = DL
SA = SL
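The rate sensitivities entering these definitions can be approximated by finite differences on the closed-form Vasicek price. The sketch below computes only the one-rate sensitivity −V′r/V of a zero-coupon bond (not the two-rate duration Dt defined above) and checks it against the known Vasicek result (1 − e^(−δτ))/δ; the parameter values are the ones used in the earlier examples.

```python
import math

def vasicek_price(r, tau, delta, theta, sigma, lam):
    """Closed-form Vasicek zero-coupon price (unit repayment), tau = s - t."""
    k = theta + lam * sigma / delta - sigma ** 2 / (2 * delta ** 2)
    e = 1.0 - math.exp(-delta * tau)
    return math.exp(-k * tau + (k - r) * e / delta - sigma ** 2 * e ** 2 / (4 * delta ** 3))

delta, theta, sigma, lam = 0.2, 0.08, 0.05, 0.02
r, tau, h = 0.06, 10.0, 1e-6

price = vasicek_price(r, tau, delta, theta, sigma, lam)
dP_dr = (vasicek_price(r + h, tau, delta, theta, sigma, lam)
         - vasicek_price(r - h, tau, delta, theta, sigma, lam)) / (2 * h)
fd_sensitivity = -dP_dr / price                        # -(1/V) V'_r by central difference
closed_form = (1.0 - math.exp(-delta * tau)) / delta   # known Vasicek sensitivity
```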

5 Options

5.1 DEFINITIONS 5.1.1 Characteristics An option 1 is a contract that confers on its purchaser, in return for a premium, the right to purchase or sell an asset (the underlying asset) on a future date at a price determined in advance (the exercise price of the option). Options for purchasing and options for selling are known respectively as call and put options. The range of assets to which options contracts can be applied is very wide: ordinary equities, bonds, exchange rates, commodities and even some derivative products such as FRAs, futures, swaps or options. An option always represents a right for the holder and an obligation to buy or sell for the issuer. This option right may be exercised when the contract expires (a European option) or on any date up to and including the expiry date (an American option). The holder of a call option will therefore exercise his option right if the price of the underlying equity exceeds the exercise price of the option or strike; conversely, a put option will be exercised in the opposite case. The assets studied in the two preceding chapters clearly show a degree of random behaviour (mean-variance theory for equities, interest-rate models for bonds). They do, however, also allow deterministic approaches (Gordon-Shapiro formula, duration and convexity). With options, the random aspect is much more intrinsic as everything depends on a decision linked to a future event. This type of contract can be a source of proﬁt (with risks linked to speculation) and a means of hedging. In this context, we will limit our discussion to European call options. Purchasing this type of option may lead to an attractive return, as when the price of the underlying equity on maturity is lower than the exercise price, the option will not be exercised and the loss will be limited to the price of the option (the premium). 
When the price of the underlying equity on maturity is higher than the exercise price, the underlying equity is received for a price lower than its value. The sale (issue) of an equity option, on the other hand, is a much more speculative operation. The profit will be limited to the premium if the price of the underlying equity remains lower than the exercise price, while considerable losses may arise if the price of the underlying equity rises well above the exercise price. This operation should therefore only be envisaged if the issuer has absolute confidence in a fall (or at worst a limited rise) in the price of the underlying equity.

Example
Let us consider a call option on an equity with a current price of 100, a premium of 3 and an exercise price of 105. We will calculate the profit made (or the loss suffered)

1 Colmant B. and Kleynen G., Gestion du risque de taux d'intérêt et instruments financiers dérivés, Kluwer, 1995. Hull J. C., Options, Futures and Other Derivatives, Prentice Hall, 1997. Hicks A., Foreign Exchange Options, Woodhead, 1993.


Table 5.1 Profit on option according to price of underlying equity

Price of underlying equity    Gain: Purchaser    Gain: Issuer
 90                            −3                  3
 95                            −3                  3
100                            −3                  3
105                            −3                  3
106                            −2                  2
107                            −1                  1
108                             0                  0
109                             1                 −1
110                             2                 −2
115                             7                 −7
120                            12                −12

by the purchaser and by the issuer of the contract according to the price reached by the underlying equity on maturity. See Table 5.1. Of course, issuers who notice the price of the underlying equity rising during the contractual period can partly protect themselves by purchasing the same option and thus closing their position. Nevertheless, because of the higher underlying equity price, the premium for the option purchased may be considerably higher than that of the option that was issued. The price (premium) of an option depends on several different factors:

• the price of the underlying equity St at the moment t (known as the spot);
• the exercise price of the option K (known as the strike);
• the duration T − t remaining until the option matures;2
• the volatility σR of the return on the underlying equity;
• the risk-free rate RF.3

The various ways of specifying the function f (which will be termed C or P depending on whether a call or a put is involved) give rise to what are termed models of valuation. These are dealt with in Section 5.3.

5.1.2 Use

The example shown in the preceding paragraph corresponds to the situation in which the purchaser can hope for an attractive gain. The profit realised is shown in graphic form in Figure 5.1.

2 This residual duration T − t is frequently referred to simply as τ. 3 This description corresponds, for example, to an option on an equity (with which we will mostly be dealing in this chapter). For an exchange option, the rate of interest is divided in two: domestic currency and foreign currency. In addition, this rate, which will be used for discounting, may be considered either discretely (discounting factor (1 + RF)^(−t)) or continuously (the notation r will then be used, with discount factor e^(−rt)).


Figure 5.1 Acquisition of a call option
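The figures of Table 5.1 can be recomputed directly from the payoff rule: the purchaser exercises only when the maturity price exceeds the strike, the premium is sunk in every case, and the issuer's gain is the purchaser's loss. A sketch using the example's values (premium 3, strike 105):

```python
def call_profit_purchaser(s_T, strike=105.0, premium=3.0):
    """Purchaser's profit on a European call: exercised only if S_T > K."""
    return max(s_T - strike, 0.0) - premium

prices = [90, 95, 100, 105, 106, 107, 108, 109, 110, 115, 120]
purchaser = [call_profit_purchaser(s) for s in prices]
issuer = [-g for g in purchaser]   # zero-sum: the issuer gains what the purchaser loses
```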

Alongside this speculative aspect, the issue of a call option can become attractive if it is held along with the underlying equity. In fact, if the underlying equity price falls (or rises little), the loss suffered on that equity will be partly offset by receipt of the premium, whereas if the price rises greatly, the proﬁt that would have been realised will be limited to the price of the option plus the differential between the exercise price and the price of the underlying equity at the start of the contract. Example 1 Following the example shown above, we calculated the proﬁt realised (or the loss suffered) when the underlying equity alone is held and when it is covered by the call option (Table 5.2). Example 2 Let us now look at a more realistic example. A European company X often has invoices expressed in US dollars payable on delivery. The prices are of course ﬁxed at the moment of purchase (long before the delivery). If the rate for the dollar rises between the moment of purchase and the moment of delivery, the company X will suffer a loss if it purchases its dollars at the moment of payment. Table 5.2 option

Profit/loss on equity covered by call option

Price of underlying equity    Equity alone    Equity plus issued call
 90                            −10             −7
 95                             −5             −2
100                              0              3
105                              5              8
106                              6              8
107                              7              8
108                              8              8
109                              9              8
110                             10              8
115                             15              8
120                             20              8


Let us assume, more specifically, that the rate for the dollar at the moment t is St (US$1 = €St) and that X purchases goods on this day (t = 0) valued at US$1000, the rate being S0 = x (US$1 = €x), for delivery in t = T. The company X, on t = 0, acquires 1000 European US$/€ calls maturing on T, the exercise price being K = €x for US$1. If ST > x, the option will be exercised and X will purchase its dollars at rate x (the rate in which the invoice is expressed), and the company will lose only the total of the premium. If ST ≤ x, the option will not be exercised and X will purchase its dollars at the rate ST, and the business will realise a profit of 1000 · (x − ST) less the premium. The purchase of the option acts as insurance cover against changes in rates. Of course it cannot be free of charge (consider the point of view of the option issuer); its price is the option premium. The case envisaged above corresponds to the acquisition of a call option. The same kind of reasoning can be applied to four situations, corresponding to the purchase or issue of a call option on one hand or of a put option on the other hand. Hence we have Figures 5.2 and 5.3. In addition to the simple cover strategy set out above, it is possible to create more complex combinations of the underlying equity, call options and put options. These more involved strategies are covered in Section 5.4.
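The hedging effect of this example can be sketched numerically. The figures below are assumptions, not from the book (the text leaves x and the premium unspecified): x = 0.90 EUR per USD and a premium of 0.02 EUR per USD.

```python
def dollar_cost_unhedged(s_T, notional=1000.0):
    """EUR cost of buying the invoiced dollars at maturity with no hedge."""
    return notional * s_T

def dollar_cost_hedged(s_T, x, premium, notional=1000.0):
    """EUR cost when holding `notional` USD calls with exercise rate x:
    exercising caps the rate paid at x; the premium is paid in every case."""
    return notional * min(s_T, x) + notional * premium

# Assumed figures: invoice rate x = 0.90 EUR per USD, premium 0.02 EUR per USD.
x, premium = 0.90, 0.02
cost_if_dollar_rises = dollar_cost_hedged(1.00, x, premium)  # capped near 1000*x
cost_if_dollar_falls = dollar_cost_hedged(0.80, x, premium)  # still benefits from the fall
```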


Figure 5.2 Issue of a call option


Figure 5.3 Acquisition and issue of a put option


5.2 VALUE OF AN OPTION

5.2.1 Intrinsic value and time value

An option premium can be split into two terms: its intrinsic value and its time value. The intrinsic value of an option at a moment t is simply the profit realised by the purchaser (without taking account of the premium) if the option were exercised at t. More specifically, for a call option it is the difference, if that difference is positive,4 between the price of the underlying equity St at that moment and the exercise price5 K of the option. If the difference is negative, the intrinsic value is by definition 0. For a put option, the intrinsic value will be the difference between the exercise price and the underlying equity price.6 Therefore, if the intrinsic value of the option is termed VI, we will have

VIt = max(0, St − K) = (St − K)+ for a call option
VIt = max(0, K − St) = (K − St)+ for a put option

with the graphs shown in Figure 5.4. The price of the option is of course at least equal to its intrinsic value. The part of the premium over and above the intrinsic value is termed time value and shown as VT, hence: VTt = pt − VIt. This time value, which is added to the intrinsic value to give the premium, represents payment in anticipation of an additional profit for the purchaser. From the point of view of the issuer, it therefore represents a kind of risk premium. The time value will of course decrease as the time left to run decreases, and ends by being cancelled out at the maturity date (see Figure 5.5).


Figure 5.4 Intrinsic value of a call option and put option
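The split of the premium can be computed directly from these definitions. The figures below (spot 110, strike 105, quoted premium 8) are illustrative assumptions, not from the book:

```python
def intrinsic_value_call(s, k):
    """VI_t = max(0, S_t - K) = (S_t - K)+ for a call."""
    return max(0.0, s - k)

def intrinsic_value_put(s, k):
    """VI_t = max(0, K - S_t) = (K - S_t)+ for a put."""
    return max(0.0, k - s)

def time_value(premium, s, k, kind="call"):
    """VT_t = p_t - VI_t: the part of the premium above the intrinsic value."""
    vi = intrinsic_value_call(s, k) if kind == "call" else intrinsic_value_put(s, k)
    return premium - vi

# A call quoted at 8 with spot 110 and strike 105 splits into 5 of intrinsic
# value and 3 of time value.
vi = intrinsic_value_call(110.0, 105.0)
vt = time_value(8.0, 110.0, 105.0)
```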


Figure 5.5 Time value according to time

4 The option is then said to be 'in the money'. If the difference is negative, the option is said to be 'out of the money'. If the underlying equity price is equal or close to the exercise price, it is said to be 'at the money'. These definitions are inverted for put options. 5 The option cannot in fact be exercised immediately unless it is of the American type. For a European option, the exercise price should normally be discounted for the period remaining until the maturity date. 6 This definition is given for an American option. For a European option, it is sufficient, within the interpretation of St, to replace the price at the moment t by the maturity-date price.


Figure 5.6 Time value according to underlying equity price


Figure 5.7 Splitting of call option premium

It is easy to see, the other parameters being constant, that the time value will be greater the nearer the underlying equity price is to the exercise price, as shown in Figure 5.6. To understand this property, let us fix ideas by viewing things from the call issuer's point of view. If the option is out of the money, it will probably not be exercised and the issuer may dispense with acquiring the underlying equity; his risk (a steep rise in the underlying equity price) will therefore be low and he will receive very little reward. In the same way, an in-the-money option will probably be exercised, and the issuer will therefore have an interest in acquiring the underlying equity; a sharp drop in the underlying equity price is then a highly improbable risk and the time value will also be low. Conversely, for an at-the-money option the issuer has no degree of certainty with regard to whether or not the option will be exercised, or how the underlying equity price will develop; the risk of the underlying equity price falling after he acquires the equity (or of a price surge without the underlying equity being acquired) is therefore high, and a risk premium will be requested in consequence. This phenomenon is shown in Figure 5.7. In addition, it is evident that the longer the period remaining until the option contract matures, the higher the risk and the greater the time value (see Figure 5.8). Of course, the value of an option at maturity is identical to its intrinsic value:

CT = (ST − K)+
PT = (K − ST)+

5.2.2 Volatility

Of the parameters that define the price of an option, let us now look more specifically at the volatility σR of the return of the underlying equity. The volatility of an option


Figure 5.8 Call premium for long (a) and short (b) maturity

is defined as a measurement of the dispersion of the return of the underlying equity. In practice, it is generally taken for a reference period of one year and expressed as a percentage. This concept of volatility can be seen from two points of view: historical volatility and implied volatility. Historical volatility is simply the annualised standard deviation of the underlying equity return, obtained from daily observations of the return in the past:

σR = √( J · (1/n) · Σt=1..n (Rt − R̄)² )

Here, the factor J represents the number of working days in the year, n is the number of observations and Rt is the return on the underlying equity. It is easy to calculate, but the major problem is that it is always 'turned towards the past' when it really needs to help analyse future developments in the option price. For this reason, the concept of implied volatility has been introduced. This involves using a valuation model to estimate the dispersion of the return of the underlying equity for the period remaining until the contract matures. The value of the option premium is determined in practice by the law of supply and demand. This premium is in turn linked to the various factors through a model of valuation, binomial or Black and Scholes (see Section 5.3):

pt = f(St, K, T − t, σR, RF)

Solving this relation with respect to σR defines the implied volatility. Although it is more complicated to obtain, this concept is preferable and it is the one that will most often be used in practice.

5.2.3 Sensitivity parameters

5.2.3.1 'Greeks'

The premium is likely to vary when each of the parameters that determine the price of the option (spot price, exercise price, maturity etc.) changes. The aim of this paragraph is to study the indices,7 known as 'Greeks', which measure the sensitivity of the premium to fluctuations in some of these characteristics through the relation pt = f(St, K, τ, σR, RF).
7 In the same way as duration and convexity, which measure the sensitivity of the value of a bond following changes in interest rates (see Chapter 4).
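The historical-volatility formula can be sketched as a short routine. The value J = 252 working days per year is an assumption (the text leaves J unspecified), and the sample returns below are illustrative:

```python
import math

def historical_volatility(returns, working_days=252):
    """sigma_R = sqrt( J * (1/n) * sum_t (R_t - Rbar)^2 ), annualised from
    n daily return observations; J = working days per year (assumed 252)."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n
    return math.sqrt(working_days * variance)

# Daily returns alternating +/-1 % have mean zero and daily deviation 1 %,
# so the annualised volatility is 0.01 * sqrt(252), about 15.9 %.
returns = [0.01, -0.01] * 50
vol = historical_volatility(returns)
```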


Here, we will restrict ourselves to examining the most commonly used sensitivity coefficients: those that relate the option price to the underlying equity price, time, volatility and the risk-free rate. In addition, the sign indications given are valid for a non-dividend-paying equity option.

The coefficient Δ (delta) represents the sensitivity of the option price with respect to the underlying equity price. It is measured by dividing the variations in these two prices for a small increase δSt in the underlying equity price:

Δ = [f(St + δSt, K, τ, σR, RF) − f(St, K, τ, σR, RF)] / δSt

Or, more precisely:

Δ = lim(δSt→0) [f(St + δSt, K, τ, σR, RF) − f(St, K, τ, σR, RF)] / δSt = f′S(St, K, τ, σR, RF)

Thus, for a call, if the underlying equity price increases by €1, the price of the option will increase by €Δ. Δ will be between 0 and 1 for a call and between −1 and 0 for a put.

Another coefficient expresses the sensitivity of the option price with respect to the underlying equity price, but this time in the second order. This is the coefficient Γ (gamma), which is expressed by the ratio of the variations in Δ on one hand and the price St on the other hand:

Γ = f″SS(St, K, τ, σR, RF)

If one wishes to compare the dependency of the option premium on the underlying equity price with that of a bond price on the actuarial rate, it can be said that Δ is to duration what Γ is to convexity. This coefficient Γ, which is always positive, is the same for a call option and for a put option.

The following coefficient, termed Θ (theta), measures the dependence of the option price on time:

Θ = f′t(St, K, T − t, σR, RF)

or, by introducing the residual life span τ = T − t of the contract,

Θ = −f′τ(St, K, τ, σR, RF)

When the maturity date for the option contract approaches, the value of the contract diminishes, implying that Θ is generally negative.

The coefficient V (vega)8 measures the sensitivity of the option premium with respect to volatility:

V = f′σ(St, K, τ, σR, RF)

It is always positive and has the same value for a call and for a put. It is interpreted as follows: if the volatility increases by 1 %, the option price increases by V.

Also termed κ (kappa) on occasions – possibly because vega is not a Greek letter!
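Once a valuation model f is chosen, the Greeks can be approximated by finite differences on it. The sketch below uses the standard Black–Scholes call formula with a continuous risk-free rate; this choice of f, and the parameter values, are assumptions, since the valuation models themselves are only introduced in Section 5.3.

```python
import math

def norm_cdf(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, tau, sigma, r):
    """Black-Scholes premium of a European call on a non-dividend-paying
    equity, with continuous risk-free rate r and residual life tau."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return s * norm_cdf(d1) - k * math.exp(-r * tau) * norm_cdf(d2)

def call_greeks(s, k, tau, sigma, r, h=1e-4):
    """Finite-difference approximations of delta, gamma, theta, vega and rho."""
    f = bs_call
    delta = (f(s + h, k, tau, sigma, r) - f(s - h, k, tau, sigma, r)) / (2 * h)
    gamma = (f(s + h, k, tau, sigma, r) - 2 * f(s, k, tau, sigma, r) + f(s - h, k, tau, sigma, r)) / h ** 2
    theta = -(f(s, k, tau + h, sigma, r) - f(s, k, tau - h, sigma, r)) / (2 * h)  # Theta = -f'_tau
    vega = (f(s, k, tau, sigma + h, r) - f(s, k, tau, sigma - h, r)) / (2 * h)
    rho = (f(s, k, tau, sigma, r + h) - f(s, k, tau, sigma, r - h)) / (2 * h)
    return delta, gamma, theta, vega, rho

# Illustrative values: spot 100, strike 105, six months, 20 % volatility, 3 % rate.
delta, gamma, theta, vega, rho = call_greeks(100.0, 105.0, 0.5, 0.20, 0.03)
```

The signs come out as the text predicts for a call: Δ between 0 and 1, Γ and V positive, Θ negative, ρ positive.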


Finally, the coefficient ρ (rho) expresses the manner in which the option price depends on the risk-free rate RF:

ρ = f′RF(St, K, τ, σR, RF)

This coefficient will be positive or negative depending on whether we are dealing with a call or a put.

5.2.3.2 'Delta hedging'

Now that these coefficients have been defined, we can move on to an interesting interpretation of the delta, which plays its part in hedging a short position (issue) in a call option (referred to as 'delta hedging'). The question is: how many units of the underlying equity must the issuer of a call acquire in order to hedge his position? This quantity is referred to as X. If the current price of the underlying equity is S, the value of the portfolio, consisting of the purchase of X units of the underlying equity and the issue of one call on that equity, is:

V(S) = X · S − C(S)

If the price of the underlying equity changes from S to S + δS, the value of the portfolio changes to:

V(S + δS) = X · (S + δS) − C(S + δS)

As Δ ≈ [C(S + δS) − C(S)] / δS, the new value of the portfolio is:

V(S + δS) = X · (S + δS) − [C(S) + Δ · δS] = X · S − C(S) + (X − Δ) · δS = V(S) + (X − Δ) · δS

The position will therefore be hedged against a movement (up or down) of the underlying equity price if the second term is zero (X = ), that is, if the issuer of the call holds units in the underlying equity. 5.2.4 General properties 5.2.4.1 Call–put parity relation for European options We will now draw up the relation that links a European call premium and a European put premium, both relating to the same underlying equity and both with the same exercise price and maturity date: this is termed the ‘call–put parity relation’. We will establish this relation for a European equity option that does not distribute a dividend during the option contract period. Let us consider a portfolio put together at moment t with: • the purchase of the underlying equity, whose value is St ; • the purchase of a put on this underlying equity, with exercise price K and maturity T ; its value is therefore Pt (St , K, τ, σR , RF );

• the sale of a call on the same underlying equity, with exercise price K and maturity T; its value is therefore Ct(St, K, τ, σR, RF);
• the borrowing (at risk-free rate RF) of a total worth K at time T; the amount is therefore K · (1 + RF)−τ.

The value of the portfolio at maturity T will be ST + PT − CT − K. As we have shown previously that CT = (ST − K)+ and that PT = (K − ST)+, this value at maturity will equal:

if ST > K:  ST + 0 − (ST − K) − K = 0
if ST ≤ K:  ST + (K − ST) − 0 − K = 0

This portfolio, regardless of changes to the value of the underlying equity between t and T and for constant K and RF, has a zero value at moment T. Because of the hypothesis of absence of arbitrage opportunity,9 the portfolio can only have a zero value at moment t. The zero value of this portfolio at moment t is expressed by:

St + Pt − Ct − K · (1 + RF)−τ = 0

or, in a more classic way, by:

Ct + K · (1 + RF)−τ = Pt + St

This is the parity relation stated.

Note
The 'call–put' parity relation is not valid for an exchange option because of the interest rate spread between the two currencies. If the risk-free interest rates for the domestic currency and the foreign currency are referred to as RF(D) and RF(F) (they are assumed to be constant and valid for any maturity date), it is easy to see that the parity relation takes the form:

Ct + K · (1 + RF(D))−τ = Pt + St · (1 + RF(F))−τ

5.2.4.2 Relation between European call and American call

Let us now establish the relation that links a European call to an American call, both on the same underlying equity and with the same exercise price and maturity date. As with the parity relation, we will deal only with equity options that do not distribute a dividend during the option contract period. As the American option can be exercised at any moment prior to maturity, its value will always be at least equal to the value of the European option with the same characteristics:

Ct(a)(St, K, T − t, σR, RF) ≥ Ct(e)(St, K, T − t, σR, RF)

The parity relation allows the following to be written in succession:

Ct(e) + K · (1 + RF)−τ = Pt(e) + St
Ct(e) ≥ St − K · (1 + RF)−τ > St − K
Ct(a) ≥ Ct(e) > (St − K)+

9

Remember that no ﬁnancial movement has occurred between t and T as we have excluded the payment of dividends.
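As a quick numerical check of the parity relation, the sketch below recovers the put premium from a call premium. The figures (call worth 4.657 on an equity at 100, strike 110, seven monthly periods at a per-period rate of 0.3274 %) anticipate the binomial example of Section 5.3.1; they are used here purely as an illustration.

```python
# call-put parity: Ct + K(1 + RF)^(-tau) = Pt + St
C, S, K, RF, T = 4.657, 100.0, 110.0, 0.003274, 7   # per-period rate, 7 periods
P = C + K * (1 + RF) ** -T - S
# P comes out close to 12.168, the put premium found later by the binomial model
```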

As (St − K)+ represents what the American call would return if exercised at moment t, its holder is best advised to retain it until moment T. At all times, therefore, this option will have the same value as the corresponding European option:

Ct(a) = Ct(e)   ∀t ∈ [0; T]

We would point out that the identity between the American and European calls does not apply to puts or to other kinds of option (such as exchange options).

5.2.4.3 Inequalities on price

The values of calls and puts obey the following inequalities:

[St − K(1 + RF)−τ]+ ≤ Ct ≤ St
[K(1 + RF)−τ − St]+ ≤ Pt(e) ≤ K(1 + RF)−τ
[K − St]+ ≤ Pt(a) ≤ K

These inequalities limit the area in which the graph of the option price as a function of the underlying equity price can be located. This leads to Figure 5.9 for a European or American call and Figure 5.10 for puts. The right-hand inequalities are obvious: they state simply that an option cannot be worth more than the gain it allows. A call cannot therefore be worth more than the underlying equity whose acquisition it allows. In the same way, a put cannot be worth more than the

Figure 5.9 Inequalities for a call value

Figure 5.10 Inequalities for the value of a European put and an American put

exercise price K at which it allows the underlying equity to be sold; and for a European put, it cannot exceed the discounted value of the exercise price in question (as exercise can only occur on the maturity date).

Let us now justify the left-hand inequality for a call. To do this, we set up at moment t a portfolio consisting of:

• the purchase of one call;
• a risk-free financial investment worth K at maturity: K(1 + RF)−τ;
• the sale of one unit of the underlying equity.

Its value at moment t will of course be: Vt = Ct + K(1 + RF)−τ − St. Its value at maturity will depend on the evolution of the underlying equity:

VT = (ST − K) + K − ST = 0   if ST > K
VT = 0 + K − ST              if ST ≤ K

In other words, VT = (K − ST)+, which is not negative for any of the possible evolution scenarios. In the absence of arbitrage opportunity, we also have Vt ≥ 0, that is, Ct ≥ St − K(1 + RF)−τ. As the price of the option cannot be negative, we have the inequality stated.

The left-hand inequality for a European put is obtained in the same way, by arbitrage-based logic using the portfolio consisting of:

• the purchase of one put;
• the purchase of one underlying equity unit;
• the borrowing of an amount worth K at maturity: K(1 + RF)−τ.

The left-hand inequality for an American put arises from the inequality for a European put. It should be noted that there is no need to discount the exercise price, as the moment at which the option right will be exercised is unknown.
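These inequalities are easy to verify numerically. The sketch below checks them against the hypothetical call and put premiums obtained later in the chapter (Section 5.3.1: premiums 4.657 and 12.168, strike 110, seven periods at 0.3274 % per period).

```python
S, K, RF, T = 100.0, 110.0, 0.003274, 7       # hypothetical data
C, P_eur = 4.657, 12.168                      # premiums anticipated from Section 5.3.1

disc_K = K * (1 + RF) ** -T                   # discounted exercise price

# call bounds: (St - K(1+RF)^-tau)+ <= Ct <= St
assert max(S - disc_K, 0.0) <= C <= S
# European put bounds: (K(1+RF)^-tau - St)+ <= Pt <= K(1+RF)^-tau
assert max(disc_K - S, 0.0) <= P_eur <= disc_K
```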

5.3 VALUATION MODELS

Before touching on the methods developed for determining the value of an option, we will show the basic principles of option pricing using an example that has been deliberately simplified as much as possible.

Example
Consider a European call option on the US$/¤ exchange rate for which the exercise price is K = 1. Suppose that at the present time (t = 0) the rate is S0 = 0.95 (US$1 = ¤0.95). We will work with a zero risk-free rate (RF = 0) in order to simplify the developments. Let us suppose also that the random changes in the underlying between moments t = 0 and t = T correspond to two scenarios s1 and s2, for which ST is ¤1.1 and ¤0.9 respectively, and that the scenarios occur with the respective probabilities 0.6 and 0.4:

ST = 1.1 with Pr(s1) = 0.6
ST = 0.9 with Pr(s2) = 0.4

The change in the exchange option value, which is also random, can therefore be described as:

CT = 0.1 with Pr(s1) = 0.6
CT = 0.0 with Pr(s2) = 0.4

Let us consider that at moment t = 0, we have a portfolio consisting of:

• the issue of a US$/¤ call (at the initial price of C0);
• a loan of ¤X;
• the purchase of US$Y,

so that:

• the initial value V0 of the portfolio is zero: the purchase of the US$Y is made exactly with what is generated by the issue of the call and the loan;
• the portfolio is risk-free, and will undergo the same evolution whatever the scenario (in fact, its value will not change, as we have assumed RF to be zero).

The initial value of the portfolio in ¤ is therefore V0 = −C0 − X + 0.95Y = 0. Depending on the scenario, the final value will be given by:

VT(s1) = −0.1 − X + 1.1 · Y
VT(s2) = −X + 0.9 · Y

The hypothesis of absence of arbitrage opportunity allows confirmation that VT(s1) = VT(s2) = 0, and hence the deduction of the values X = 0.45 and Y = 0.5. From the initial value of the portfolio, the initial value of the option is then deduced:

C0 = −X + 0.95Y = 0.025

It is important to note that this value is totally independent of the probabilities 0.6 and 0.4 associated with the two evolution scenarios for the underlying price; otherwise we would have had C0 = 0.1 × 0.6 + 0 × 0.4 = 0.06. If we now determine another law of probability

Pr(s1) = q    Pr(s2) = 1 − q

for which C0 = Eq(CT), we have 0.025 = 0.1 · q + 0 · (1 − q), that is: q = 0.25. We are in fact looking at the law of probability for which S0 = Eq(ST):

Eq(ST) = 1.1 · 0.25 + 0.9 · 0.75 = 0.95 = S0
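The arbitrage reasoning above can be reproduced in a few lines. The sketch below solves for the loan X and the dollar holding Y that make the portfolio riskless, then recovers C0 and the risk-neutral probability q.

```python
S0, K = 0.95, 1.0
S_up, S_down = 1.1, 0.9                       # the two scenarios for S_T
C_up = max(S_up - K, 0.0)                     # 0.1
C_down = max(S_down - K, 0.0)                 # 0.0

# V_T(s1) = -C_up - X + S_up*Y = 0  and  V_T(s2) = -C_down - X + S_down*Y = 0
Y = (C_up - C_down) / (S_up - S_down)         # 0.5 dollars bought
X = S_down * Y - C_down                       # 0.45 borrowed
C0 = -X + S0 * Y                              # 0.025, from V_0 = 0

q = (S0 - S_down) / (S_up - S_down)           # risk-neutral probability, 0.25
# C0 equals the expectation of C_T under q (RF = 0, so no discounting)
```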

We have therefore seen, in a very specific case that calls for generalisation, that the current value of the option is equal to the mathematical expectation of its future value,10 with respect to the law of probability for which the current value of the underlying equity is equal to the expectation of its future value. This law of probability is known as the risk-neutral probability.

5.3.1 Binomial model for equity options

This model was produced by Cox, Ross and Rubinstein.11 In this discrete model we look simply at a list of times 0, 1, 2, . . . , T separated by a unit of time (the period), which is usually quite short. Placing ourselves in a perfect market, we envisage a European equity option that does not distribute any dividends during the contract period and whose volatility is constant during the period in question. In addition, we assume that the risk-free interest rate does not change during this period, that it is valid for any maturity (flat, constant yield curve), and that it is the same for a loan and for an investment. This interest rate, termed RF, is expressed with respect to a duration equal to one period; the same applies to the other parameters (return, volatility etc.).

Remember (Section 3.4.2) that the change in the underlying equity value from one time to the next is dichotomous in nature: the equity has at moment t the value St, but at the next moment t + 1 will have one of the two values St · u (greater than St) or St · d (less than St), with respective probabilities α and (1 − α). We have d ≤ 1 < 1 + RF ≤ u, and the parameters u, d and α, which are assumed constant over time, should be estimated on the basis of observations. We therefore have the following graphic representation of the development in equity prices over one period:

St → St+1 = St · u   (probability α)
St → St+1 = St · d   (probability 1 − α)

and therefore, more generally speaking, the tree shown in Figure 5.11. Now let us address the issue of evaluating options at the initial moment.
Our reasoning will be applied to a call option. It is known that the value of the option at the end of the contract will be expressed according to the value of the equity by CT = (ST − K)+ .

Figure 5.11 Binomial tree for underlying equity

10 When the risk-free rate is zero, remember.
11 Cox J., Ross S. and Rubinstein M., Option pricing: a simplified approach, Journal of Financial Economics, No. 7, 1979, pp. 229–63.

After constructing the tree diagram for the equity from moment 0 to moment T, we will now construct the tree from T back to 0 for the option, starting from each of the ends of the equity tree diagram, so as to reconstruct the value C0 of the option at 0. This reasoning will be applied in stages.

5.3.1.1 One period

Assume that T = 1. From the equity tree diagram it can be clearly seen that the call C0 (unknown) can evolve into two values with the respective probabilities α and (1 − α):

C1 = C(u) = (S0 · u − K)+   (probability α)
C1 = C(d) = (S0 · d − K)+   (probability 1 − α)

As the value of C1 (that is, the values of C(u) and C(d)) is known, we will now determine the value of C0. To do this, we construct a portfolio put together at t = 0 by:

• the purchase of X underlying equities with a value of S0;
• the sale of one call on this underlying equity, with a value of C0.

The value V0 of this portfolio, and its evolution V1 in the context described, are given by:

V0 = X · S0 − C0
V1 = X · S0 · u − C(u)   or   V1 = X · S0 · d − C(d)

We then choose X so that the portfolio is risk-free (the two possible values of V1 are then identical). The hypothesis of absence of arbitrage opportunity shows that in this case the return on the portfolio must be given by the risk-free rate RF. We therefore obtain:

V1 = X · S0 · u − C(u) = X · S0 · d − C(d)
V1 = (X · S0 − C0)(1 + RF)

The first equation readily provides:

X · S0 = [C(u) − C(d)] / (u − d)

and therefore:

V1 = [d · C(u) − u · C(d)] / (u − d)

The second equation then provides:

[d · C(u) − u · C(d)] / (u − d) = {[C(u) − C(d)] / (u − d) − C0} (1 + RF)

This is easily solved with respect to C0:

C0 = (1 + RF)−1 { [(1 + RF) − d] / (u − d) · C(u) + [u − (1 + RF)] / (u − d) · C(d) }

The coefficients of C(u) and C(d) are clearly between 0 and 1 and total 1. We therefore introduce:

q = [(1 + RF) − d] / (u − d)    1 − q = [u − (1 + RF)] / (u − d)

They constitute the risk-neutral law of probability. We therefore have the value of the original call:

C0 = (1 + RF)−1 [q · C(u) + (1 − q) · C(d)]

Note 1
As was noted in the introductory example, the probability of growth α does not feature in the above relation. The only law of probability involved is the risk-neutral probability q, with respect to which C0 appears as the discounted value of the expected value of the call at maturity (t = 1). The term 'risk-neutral probability' is based on the observation that the expected value of the underlying equity at maturity (t = 1) with respect to this law of probability is given by:

Eq(S1) = q · S0 · u + (1 − q) · S0 · d
       = S0 { [(1 + RF) − d] / (u − d) · u + [u − (1 + RF)] / (u − d) · d }
       = S0 (1 + RF)

The growth in the risk-free security is thus the same as the expected growth in the risky security (under this law of probability).

Note 2
When using the binomial model in practice, it is simpler to apply the one-period reasoning at each node of the tree diagram, progressing from T to 0. We will, however, push the analysis further in order to obtain a general result.

5.3.1.2 Two periods

Let us now suppose that T = 2. The binomial tree diagram for the option is now written as:

C0 → C1 = C(u) → C2 = C(u, u) = (S0 · u² − K)+
               → C2 = C(u, d) = C(d, u) = (S0 · ud − K)+
     C1 = C(d) → C2 = C(d, d) = (S0 · d² − K)+

The previous reasoning allows the transition from time 2 to time 1:

C(u) = (1 + RF)−1 [q · C(u, u) + (1 − q) · C(u, d)]
C(d) = (1 + RF)−1 [q · C(d, u) + (1 − q) · C(d, d)]

And from time 1 to time 0:

C0 = (1 + RF)−1 [q · C(u) + (1 − q) · C(d)]
   = (1 + RF)−2 [q² · C(u, u) + 2q(1 − q) · C(u, d) + (1 − q)² · C(d, d)]

Consideration of the coefficients of C(u, u), C(u, d) and C(d, d) allows the above note to be made more precise: C0 is the discounted value of the expected value of the call at maturity (t = 2) with respect to a binomial law of probability12 with parameters (2; q).

5.3.1.3 T periods

Generalising what has already been said, it is seen that C0 is the discounted value of the expected value of the call at maturity (t = T) with respect to a binomial law of probability with parameters (T; q). Writing C(T, j) for the binomial coefficient 'T choose j', we can therefore write:

C0 = (1 + RF)−T Σj=0..T C(T, j) q^j (1 − q)^(T−j) C(u, . . . , u, d, . . . , d)   [j letters u, T − j letters d]
   = (1 + RF)−T Σj=0..T C(T, j) q^j (1 − q)^(T−j) (S0 u^j d^(T−j) − K)+

As u^j d^(T−j) is an increasing function of j, if one introduces J = min{j : S0 u^j d^(T−j) − K > 0}, that is, the smallest integer strictly greater than [ln K − ln(S0 d^T)] / (ln u − ln d), the evaluation of the call takes the form:

C0 = (1 + RF)−T Σj=J..T C(T, j) q^j (1 − q)^(T−j) (S0 u^j d^(T−j) − K)
   = S0 Σj=J..T C(T, j) [uq / (1 + RF)]^j [d(1 − q) / (1 + RF)]^(T−j) − K(1 + RF)−T Σj=J..T C(T, j) q^j (1 − q)^(T−j)

Because

uq / (1 + RF) + d(1 − q) / (1 + RF) = {u[(1 + RF) − d] + d[u − (1 + RF)]} / [(1 + RF)(u − d)] = 1

we introduce:

q′ = uq / (1 + RF)    1 − q′ = d(1 − q) / (1 + RF)

12 See Appendix 2.
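The closed formula just derived can be sketched directly in a few lines of code; the parameters used below anticipate the worked example that follows (monthly period, u = 1.07477, d = 0.93043, RF = 0.003274 per period).

```python
from math import comb

def closed_form_call(S0, K, RF, u, d, T):
    """European call via the closed binomial formula (sketch)."""
    q = ((1 + RF) - d) / (u - d)              # risk-neutral probability
    total = 0.0
    for j in range(T + 1):
        payoff = S0 * u**j * d**(T - j) - K
        if payoff > 0:                        # only terms with j >= J contribute
            total += comb(T, j) * q**j * (1 - q)**(T - j) * payoff
    return total / (1 + RF) ** T

C0 = closed_form_call(100.0, 110.0, 0.003274, 1.07477, 0.93043, 7)
# C0 reproduces the value 4.657 found in the example below
```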

By introducing the notation B(n; p) for a binomial random variable with parameters (n, p), we can therefore write:

C0 = S0 · Pr[B(T; q′) ≥ J] − K(1 + RF)−T · Pr[B(T; q) ≥ J]

The 'call–put' parity relation C0 + K(1 + RF)−T = P0 + S0 immediately yields the evaluation formula for the put with the same characteristics:

P0 = −S0 · Pr[B(T; q′) < J] + K(1 + RF)−T · Pr[B(T; q) < J]

Note
The parameters u and d are determined, for example, on the basis of the volatility σR of the return on the underlying equity. In fact, as the return over one period takes the values (u − 1) or (d − 1) with the respective probabilities α and (1 − α), we have:

ER = α(u − 1) + (1 − α)(d − 1)
σR² = α(u − 1)² + (1 − α)(d − 1)² − [α(u − 1) + (1 − α)(d − 1)]²
    = α(1 − α)(u − d)²

By choosing α = 1/2, we arrive at u − d = 2σR. Cox, Ross and Rubinstein suggest taking d = 1/u, which leads to an easily solved second-degree equation or, with a Taylor approximation, u = e^σR and d = e^−σR.

Example
Let us consider a call option of seven months' duration, relating to an equity with a current value of ¤100 and an exercise price of ¤110. It is assumed that its volatility is σR = 0.25, calculated on an annual basis, and that the risk-free rate is 4 % per annum. We will assess the value of this call at t = 0 by constructing a binomial tree diagram with the month as the basic period. The equivalent monthly volatility and risk-free rate are given by:

σR = √(1/12) · 0.25 = 0.07219
RF = 1.04^(1/12) − 1 = 0.003274

We therefore have u − 1/u = 0.1443, whose only positive root is13 u = 1.07477 (and therefore d = 0.93043). The risk-neutral probability is:

q = (1.003274 − 0.93043) / (1.07477 − 0.93043) = 0.5047

13 If we had chosen α = 1/3 instead of 1/2, we would have found that u = 1.0795, that is, a relatively small difference; the estimation of u therefore only depends relatively little on α.

Let us first show the practical method of working: the construction of two binomial tree diagrams (forward for the equity and backward for the option). For example, we have for the two values of S1:

S0 · u = 100 · 1.07477 = 107.477
S0 · d = 100 · 0.93043 = 93.043

The binomial tree for the underlying equity is shown in Table 5.3.

Table 5.3 Binomial tree for underlying equity

t = 0: 100
t = 1: 107.477  93.043
t = 2: 115.513  100.000  86.570
t = 3: 124.150  107.477  93.043  80.548
t = 4: 133.432  115.513  100.000  86.570  74.944
t = 5: 143.409  124.150  107.477  93.043  80.548  69.731
t = 6: 154.132  133.432  115.513  100.000  86.570  74.944  64.880
t = 7: 165.656  143.409  124.150  107.477  93.043  80.548  69.731  60.366

The binomial tree diagram for the option is constructed backwards. The entries at maturity (t = 7) are constructed on the basis of the relation CT = (ST − K)+; the first of them is max(165.656 − 110; 0) = 55.656, and the elements of the preceding periods are deduced from them, for example:

(1 / 1.003274) · [0.5047 · 55.656 + 0.4953 · 33.409] = 44.491

This gives us Table 5.4.

Table 5.4 Binomial tree for option

t = 0: 4.657
t = 1: 7.401  1.891
t = 2: 11.462  3.312  0.456
t = 3: 17.196  5.696  0.906  0
t = 4: 24.809  9.555  1.801  0  0
t = 5: 34.126  15.482  3.580  0  0  0
t = 6: 44.491  23.791  7.118  0  0  0  0
t = 7: 55.656  33.409  14.150  0  0  0  0  0

The initial value of the call is therefore C0 = ¤4.657.

Let us now show the calculation of the value of the option based on the final formula. The auxiliary probability is given by:

q′ = 1.07477 · 0.5047 / 1.003274 = 0.5406
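The backward construction of Tables 5.3 and 5.4 can be sketched as follows (the parameters are those of the example above):

```python
def binomial_call(S0, K, RF, u, d, T):
    """European call by backward induction on a T-period binomial tree."""
    q = ((1 + RF) - d) / (u - d)
    # values at maturity, indexed by the number j of upward movements
    values = [max(S0 * u**j * d**(T - j) - K, 0.0) for j in range(T + 1)]
    for _ in range(T):
        # one-period discounting at every node, as in Note 2 above
        values = [(q * values[j + 1] + (1 - q) * values[j]) / (1 + RF)
                  for j in range(len(values) - 1)]
    return values[0]

C0 = binomial_call(100.0, 110.0, 0.003274, 1.07477, 0.93043, 7)   # ~4.657
P0 = C0 + 110.0 * (1 + 0.003274) ** -7 - 100.0                    # ~12.168 by parity
```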

In addition, as [ln 110 − ln(100 · d^7)] / (ln u − ln d) = 4.1609, we find that J = 5. This allows us to calculate:

Pr[B(7; p) ≥ 5] = C(7, 5) p^5 (1 − p)² + C(7, 6) p^6 (1 − p) + p^7 = p^5 (21 − 35p + 15p²)

and therefore: Pr[B(7; q) ≥ 5] = 0.2343 and Pr[B(7; q′) ≥ 5] = 0.2984. The price of the call therefore equals:

C0 = 100 · 0.2984 − 110 · (1 + RF)−7 · 0.2343 = 4.657

Meanwhile, the premium for the put with the same characteristics is:

P0 = −100 · (1 − 0.2984) + 110 · (1 + RF)−7 · (1 − 0.2343) = 12.168

Note that it is logical for the price of the put to be higher than that of the call, as the option is currently 'out of the money'.

5.3.1.4 Taking account of dividends

We have assumed until now that the underlying equity does not pay a dividend. Let us now examine a case in which dividends are paid. If only one dividend is paid during the ith period (interval [i − 1; i]), and the rate of the dividend is termed δ (ratio of the dividend amount to the value of the security), the value of the security is reduced at the rate δ when the dividend is paid, and the binomial tree diagram for the underlying equity must therefore be modified as follows:

• up to time (i − 1), no change: the values carried by the nodes of the tree diagram for the periods j ≤ i − 1 remain S0 u^k d^(j−k) (k = 0, . . . , j);
• from time i onwards (that is, for j ≥ i), the values become14 S0 (1 − δ) u^k d^(j−k) (k = 0, . . . , j);
• the tree diagram for the option is constructed in the classic backward style from that point;
• if several dividends are paid at various times during the option contract, the procedure described above must be applied whenever a payment is made.

5.3.2 Black and Scholes model for equity options

We now develop the well-known continuous-time model compiled by Black and Scholes.15 In this model the option, concluded at moment 0 and maturing at moment T, can be evaluated at any moment t ∈ [0; T], and as usual we note τ = T − t.
We further assume that the risk-free rate of interest does not change during this period, that it is valid for any maturity date (flat and constant yield curve) and that it is the same

14 This means that when the tree diagram is constructed purely numerically, taking account of the factor (1 − δ) will only be effective for the passage from the time i − 1 to the time i.
15 Black F. and Scholes M., The pricing of options and corporate liabilities, Journal of Political Economy, Vol. 81, 1973, pp. 637–59.

for an investment as for a loan. The annual rate of interest, termed RF up until now, is replaced in this continuous model by the corresponding instantaneous rate r = ln(1 + RF), so that a unit total invested for a period of t years becomes (1 + RF)^t = e^rt.

Remember (see Section 3.4.2) that the evolution of the underlying equity value is governed by the stochastic differential equation:

dSt / St = ER · dt + σR · dwt

We will initially establish16 the Black and Scholes formula for a call option, the value of which is considered to be a function of the value St of the underlying equity and of time t, the other parameters being treated as constants: Ct = C(St, t). By applying Itô's formula to the function C(St, t), we obtain:

dC(St, t) = [Ct + ER St CS + (σR²/2) St² CSS] · dt + σR St CS · dwt

Let us now put together a portfolio that at moment t consists of:

• the purchase of X underlying equities with a value of St;
• the sale of one call on the underlying equity, with value C(St, t).

The value Vt of this portfolio is given by Vt = X · St − C(St, t). This, by differentiation, gives:

dVt = X · [ER St · dt + σR St · dwt] − [Ct + ER St CS + (σR²/2) St² CSS] · dt − σR St CS · dwt
    = {X · ER St − [Ct + ER St CS + (σR²/2) St² CSS]} · dt + [X · σR St − σR St CS] · dwt

We then choose X so that the portfolio no longer has any random component (the coefficient of dwt in the preceding relation must be zero). The hypothesis of absence of arbitrage opportunity then shows that the return on the portfolio must be given by the risk-free rate r:

dVt / Vt = r · dt + 0 · dwt

We therefore arrive at:

{X · ER St − [Ct + ER St CS + (σR²/2) St² CSS]} / [X · St − C(St, t)] = r
[X · σR St − σR St CS] / [X · St − C(St, t)] = 0

16 We will only develop the financial part of the logic, as the end of the demonstration is purely analytical. Readers interested in details of the calculations can consult the original literature or Devolder P., Finance Stochastique, Éditions de l'ULB, 1993.

or, in the same way:

X · (ER − r)St − [Ct + ER St CS + (σR²/2) St² CSS − rC(St, t)] = 0
X − CS = 0

The second equation provides the value of X that cancels out the random component of the portfolio: X = CS. By substituting into the first equation, we find:

(ER − r)St · CS − [Ct + ER St CS + (σR²/2) St² CSS − rC(St, t)] = 0

In other words:

Ct + rSt CS + (σR²/2) St² CSS − rC(St, t) = 0

In this equation the instantaneous mean return ER has disappeared.17 We are looking at a second-order partial differential equation (in which no element is now random) for the unknown function C(St, t). It admits a single solution if two boundary conditions are imposed:

C(0, t) = 0
C(ST, T) = (ST − K)+

Through a change of variables, this equation can be turned into an equation well known to physicists: the heat equation.18 It is in fact easy, although demanding, to see that if the new unknown function u(x, s) = C(St, t) · e^rt is introduced, with the change of variables

St = K · exp[ σR² (x − s) / (2(r − σR²/2)) ]
t = T − s σR² / [2(r − σR²/2)²]

which inverts to

x = (2/σR²)(r − σR²/2) · [ln(St/K) + (r − σR²/2) τ]
s = (2/σR²)(r − σR²/2)² · τ

the equation obtained turns into uxx = us, with the boundary conditions:

lim x→−∞ u(x, s) = 0
u(x, 0) = v(x) = K · {exp[ σR² x / (2(r − σR²/2)) ] − 1}  if x ≥ 0,  v(x) = 0  if x < 0

This heat equation has the solution:

u(x, s) = [1 / (2√(πs))] ∫−∞..+∞ v(y) e^−(x−y)²/(4s) dy

17 In the same way as the independence of the result obtained by the binomial model with respect to the probability α governing the evolution of the underlying share price was noted.
18 See for example: Krasnov M., Kisilev A., Makarenko G. and Chikin E., Mathématiques supérieures pour ingénieurs et polytechniciens, De Boeck, 1993. Also: Sokolnikov I. S. and Redheffer R. M., Mathematics of Physics and Modern Engineering, McGraw-Hill, 1966.

By making the calculations with the specific expression of v(y), and then making the inverse change of variables, we obtain the Black and Scholes formula for the call option:

C(St, t) = St Φ(d1) − K e−rτ Φ(d2)

where we have:

d1,2 = [ln(St/K) + (r ± σR²/2) τ] / (σR √τ)

and the function Φ represents the standard normal distribution function:

Φ(t) = [1/√(2π)] ∫−∞..t e^−x²/2 dx

The price Pt of a put option can be evaluated on the basis of the price of the call option, thanks to the 'call–put' parity relation Ct + K · e−rτ = Pt + St. In fact:

P(St, t) = C(St, t) + Ke−rτ − St
         = St Φ(d1) − Ke−rτ Φ(d2) + Ke−rτ − St
         = −St [1 − Φ(d1)] + Ke−rτ [1 − Φ(d2)]

and therefore, because 1 − Φ(t) = Φ(−t):

P(St, t) = −St Φ(−d1) + Ke−rτ Φ(−d2)

Example
Consider an option with the same characteristics as in Section 5.3.1: S0 = 100, K = 110, t = 0, T = 7 months, σR = 0.25 on an annual basis and RF = 4 % per year.

We are working with the year as the time basis, so that τ = 7/12 and r = ln 1.04 = 0.03922. Then:

d1,2 = [ln(100/110) + (0.03922 ± 0.25²/2) · 7/12] / (0.25 · √(7/12))

that is, d1 = −0.2839 and d2 = −0.4748. Hence Φ(d1) = 0.3882 and Φ(d2) = 0.3175. This allows the price of the call to be calculated:

C = C(S0, 0) = 100 · Φ(d1) − 110 · e^(−0.03922 · 7/12) · Φ(d2) = 4.695

The put premium with the same characteristics totals:

P = P(S0, 0) = −100 · [1 − Φ(d1)] + 110 · e^(−0.03922 · 7/12) · [1 − Φ(d2)] = 12.207

Note the similarity of these figures to the values obtained using the binomial model (4.657 and 12.168 respectively).
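A sketch of this calculation, using only the standard library (the figures are those of the example):

```python
import math

def N(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, tau, sigma, r):
    """Black-Scholes call and put premia for a non-dividend-paying equity."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    call = S * N(d1) - K * math.exp(-r * tau) * N(d2)
    put = -S * N(-d1) + K * math.exp(-r * tau) * N(-d2)
    return call, put

C, P = black_scholes(100.0, 110.0, 7 / 12, 0.25, math.log(1.04))
# C ~ 4.695 and P ~ 12.207, as above
```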

5.3.2.2 Sensitivity parameters

When the price of an option is calculated using the Black and Scholes formula, the sensitivity parameters or 'Greeks' take on a practical form. Let us examine first the delta of a call option. If the reduced normal density is termed φ:

φ(x) = Φ′(x) = [1/√(2π)] e^−x²/2

we arrive, by derivation, at:

Δ(C) = CS = Φ(d1) + [1 / (St σR √τ)] · [St φ(d1) − Ke−rτ φ(d2)]

It is easy to see that the quantity between the square brackets is zero and that therefore Δ(C) = Φ(d1); by following a very similar logic, we arrive for a put at Δ(P) = Φ(d1) − 1.

The above formula provides a very simple means of determining the number of equities that should be held by a call issuer to hedge his risk (the delta hedging). This is a common use of the Black and Scholes relation: the price of an option is determined by the law of supply and demand, and its 'inversion' provides the implied volatility. The latter is then used in the relation Δ(C) = Φ(d1), which is then known as the hedging formula.

The other sensitivity parameters (gamma, theta, vega and rho) are obtained in a similar way:

Γ(C) = Γ(P) = φ(d1) / (St σR √τ)
Θ(C) = −St σR φ(d1) / (2√τ) − rKe−rτ Φ(d2)
Θ(P) = −St σR φ(d1) / (2√τ) + rKe−rτ Φ(−d2)
V(C) = V(P) = √τ St φ(d1)
ρ(C) = τKe−rτ Φ(d2)
ρ(P) = −τKe−rτ Φ(−d2)

In finishing, let us mention a relationship that links the delta, gamma and theta parameters. The partial differential equation obtained during the demonstration of the Black and Scholes formula for a call is also valid for a put (the price therefore being referred to as p without further specification):

pt + rSt pS + (σR²/2) St² pSS − rp(St, t) = 0

This, in terms of the sensitivity parameters, gives:

Θ + rSt Δ + (σR²/2) St² Γ = r · p

5.3.2.3 Taking account of dividends

If a continuous-rate dividend19 δ is paid between t and T, and the underlying equity is worth St (resp. ST) at moment t (resp. T), it can be said that had it not paid a dividend, it would have passed from the value St to the value e^δτ ST. It can equally be said that the same equity without dividend would pass from the value e^−δτ St at moment t to the value ST at moment T. In order to take account of the dividend, therefore, it suffices within the Black and Scholes formula to replace St by e^−δτ St, thus giving:

C(St, t) = St e−δτ Φ(d1) − Ke−rτ Φ(d2)
P(St, t) = −St e−δτ Φ(−d1) + Ke−rτ Φ(−d2)

where we have:

d1,2 = [ln(St/K) + (r − δ ± σR²/2) τ] / (σR √τ)

19 A discounting/capitalisation factor of the exponential type is used here and throughout this paragraph.
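The relation Θ + rStΔ + (σR²/2)St²Γ = r · p can be checked numerically. The sketch below computes the call Greeks in closed form (with the hypothetical parameters of the earlier example) and verifies the identity.

```python
import math

def n_pdf(x):
    # reduced normal density phi
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def n_cdf(x):
    # standard normal distribution function Phi
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_greeks(S, K, tau, sigma, r):
    """Black-Scholes call price and its delta, gamma, theta, vega."""
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / sq
    d2 = d1 - sq
    price = S * n_cdf(d1) - K * math.exp(-r * tau) * n_cdf(d2)
    delta = n_cdf(d1)
    gamma = n_pdf(d1) / (S * sq)
    theta = (-S * sigma * n_pdf(d1) / (2.0 * math.sqrt(tau))
             - r * K * math.exp(-r * tau) * n_cdf(d2))
    vega = math.sqrt(tau) * S * n_pdf(d1)
    return price, delta, gamma, theta, vega

S, K, tau, sigma, r = 100.0, 110.0, 7 / 12, 0.25, math.log(1.04)
C, delta, gamma, theta, vega = call_greeks(S, K, tau, sigma, r)
lhs = theta + r * S * delta + 0.5 * sigma**2 * S**2 * gamma   # equals r * C
```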

5.3.3 Other models of valuation 5.3.3.1 Options on bonds It is not enough to apply the methods shown above (binomial tree diagram or Black and Scholes formula) to options on bonds. In fact: • Account must be taken of coupons regularly paid. • The constancy of the underlying equity volatility (a valid hypothesis for equities) does not apply in the case of bonds as their values on maturity converge towards the repayment value R. The binomial model can be adapted to suit this situation, but is not an obvious generalisation of the method set out above.20 Adapting the Black and Scholes model consists of replacing the geometric Brownian motion that represents the changes in the value of the equity with a stochastic process that governs the changes in interest rates, such as those used as the basic for the Vasicek and Cox, Ingersoll and Ross models (see Section 4.5). Unfortunately, the partial derivatives equation deduced therefrom does not generally allow an analytical solution and numeric solutions therefore have to be used.21 5.3.3.2 Exchange options For an exchange option, two risk-free rates have to be taken into consideration: one relative to the domestic currency and one relative to the foreign currency. For the discrete model, these two rates are referred to respectively as RF(D) and RF(F ) . By altering the logic of Section 5.3.1 using this generalisation, it is possible to determine the price of an exchange option using the binomial tree diagram technique. It will be seen that the principle set out above remains valid with a slight alteration of the close formulae: C0 is the discounted expected value of the call on maturity (for a period): C0 = (1 + RF(D) )−1 q · C(u) + (1 − q) · C(d) with the neutral risk probability: % $ 1 + (RF(D) − RF(F ) ) − d q= u−d

1−q =

$ % u − 1 + (RF(D) − RF(F ) ) u−d

For the continuous model, the interest rates in the domestic and foreign currencies are referred to respectively as r^(D) and r^(F). Following a logic similar to that used for options on dividend-paying equities, we see that the Black and Scholes formula remains valid provided the underlying price S_t is replaced by S_t e^{−r^(F) τ}, which gives the formulae:

C(S_t, t) = S_t e^{−r^(F) τ} Φ(d_1) − K e^{−r^(D) τ} Φ(d_2)

P(S_t, t) = −S_t e^{−r^(F) τ} Φ(−d_1) + K e^{−r^(D) τ} Φ(−d_2)

²⁰ Read for example Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
²¹ See for example Courtadon G., 'The pricing of options on default-free bonds', Journal of Financial and Quantitative Analysis, Vol. 17, 1982, pp. 75–100.

Options    175

where we have:

d_{1,2} = [ln(S_t / K) + (r^(D) − r^(F) ± σ_R²/2) · τ] / (σ_R √τ)

This is known as the Garman–Kohlhagen formula.22
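As a sketch, the Garman–Kohlhagen formulae can be coded directly; beyond arithmetic, the only ingredient is the standard normal distribution function. All numerical inputs below are illustrative assumptions:

```python
from math import log, sqrt, exp, erf

def _N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def garman_kohlhagen(S, K, tau, r_dom, r_for, sigma):
    """Garman-Kohlhagen price of a European currency call and put."""
    d1 = (log(S / K) + (r_dom - r_for + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    call = S * exp(-r_for * tau) * _N(d1) - K * exp(-r_dom * tau) * _N(d2)
    put = -S * exp(-r_for * tau) * _N(-d1) + K * exp(-r_dom * tau) * _N(-d2)
    return call, put

c, p = garman_kohlhagen(S=1.05, K=1.00, tau=0.5, r_dom=0.04, r_for=0.02, sigma=0.12)
```

A quick sanity check is put–call parity for currency options, C − P = S e^{−r^(F)τ} − K e^{−r^(D)τ}, which the two formulae satisfy identically.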

5.4 STRATEGIES ON OPTIONS²³

5.4.1 Simple strategies

5.4.1.1 Pure speculation

As we saw in Section 5.1, the asymmetrical payoff structure particular to options allows investors who hold them in isolation to profit from a favourable variation in the underlying equity price while limiting the loss (to the premium paid) when a contrary variation occurs. The issue of a call/put option, on the other hand, is a much more speculative operation: the profit is limited to the premium if the underlying equity price remains lower/higher than the exercise price, while considerable losses may arise if the price of the underlying equity rises/falls sharply. This type of operation should therefore only be envisaged if the issuer is confident that the price of the underlying equity will fall/rise.

5.4.1.2 Simultaneous holding of put option and underlying equity

As the purchase of a put option allows one to profit from a fall in the underlying equity price, it seems natural to combine it with the holding of the underlying equity itself, in order to limit the loss inflicted by a fall in the price of the equity held alone.

5.4.1.3 Issue of a call option with simultaneous holding of underlying equity

We have also seen (Example 1 in Section 5.1.2) that it can be worthwhile to issue a call option while holding the underlying equity at the same time. When the underlying equity price falls (or rises slightly), the loss incurred on the equity is partly compensated by encashment of the premium, whereas when the price rises steeply, the profit that would have been realised on the underlying equity is limited to the price of the option increased by the difference between the exercise price and the underlying equity price at the beginning of the contract.

5.4.2 More complex strategies

Combining options allows the creation of payoff distributions that do not exist for classic assets such as equities or bonds.
These strategies are usually used by investors trying to turn very specific forecasts to profit. We will look briefly at the following:

• straddles;
• strangles;

²² Garman M. and Kohlhagen S., 'Foreign currency option values', Journal of International Money and Finance, No. 2, 1983, pp. 231–7.
²³ This presentation is based on Reilly F. K. and Brown K. C., Investment Analysis and Portfolio Management, South-Western, 2000.

176

Asset and Risk Management

• spreads;
• range forwards.

5.4.2.1 Straddles

A straddle consists of simultaneously purchasing (resp. selling) a call option and a put option with identical underlying equity, exercise price and maturity date. The simultaneous purchase (resp. sale) corresponds to a long (resp. short) straddle. Clearly this is a way of playing volatility, as it is in essence contradictory to play the rise and the fall of the underlying equity price at the same time. We saw in Section 5.2.3 (The Greeks: vega) that the premium of an option increases along with volatility. As a result, the long straddle (resp. short straddle) is the action of an investor who believes that the underlying equity price will vary more (resp. less) than historically, regardless of the direction of variation. It is particularly worth mentioning that with the short straddle, it is possible to make money with a zero variation in the underlying equity price. Finally, note that the straddle (Figure 5.12) is related to a particular type of option known as the chooser option.²⁴

5.4.2.2 Strangles

The strangle is identical to the straddle except for the exercise price, which is not the same for the call option and the put option, both options being 'out of the money'. As a result:

• The premium is lower.
• The expected variation must be greater than that associated with the straddle.

This type of strategy therefore presents a less aggressive risk-return profile than the straddle; a comparison is shown in Figure 5.13.

5.4.2.3 Spreads

Option spreads consist of the simultaneous purchase and sale of two contracts that are identical in all but one of their characteristics:

Figure 5.12 Long straddle and short straddle

²⁴ Reilly F. K. and Brown K. C. suggest reading Rubinstein M., 'Options for the Undecided', in From Black–Scholes to Black Holes, Risk Magazine, 1992.

Figure 5.13 Long strangle compared with long straddle

Figure 5.14 Bull money spread

• The money spread consists of the simultaneous sale of an out-of-the-money call option and the purchase of the same option in the money. The term bull money spread (resp. bear money spread) describes a money-spread combination that gains when the underlying equity price rises (resp. falls) (see Figure 5.14). The term butterfly money spread defines a combination of bear and bull money spreads that hedges (limits) potential losses (with, obviously, reduced opportunities for profit).
• The calendar spread consists of the simultaneous sale and purchase of call or put options with identical exercise prices but different maturity dates.

Spreads are used when a contract appears to have an aberrant value in comparison with another contract.

5.4.2.4 Range forwards

For the record, range forwards consist of a combination of two optional positions. This combination is used for hedging, mainly for options on exchange rates.
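The maturity payoffs of the strategies described above are simple piecewise-linear functions of the final price. A sketch for the long straddle and the bull money spread, net of premiums (all strikes and premiums below are hypothetical):

```python
# Payoff at maturity of a long straddle and a bull money spread.
# Strikes and premiums are illustrative assumptions.

def long_straddle(S_T, K, call_premium, put_premium):
    """Buy a call and a put with the same strike K and maturity."""
    return max(S_T - K, 0) + max(K - S_T, 0) - call_premium - put_premium

def bull_money_spread(S_T, K_low, K_high, prem_low, prem_high):
    """Buy the K_low call (in the money), sell the K_high call (out of the money)."""
    return (max(S_T - K_low, 0) - prem_low) - (max(S_T - K_high, 0) - prem_high)

for S_T in (80, 100, 120):
    print(S_T,
          long_straddle(S_T, K=100, call_premium=4, put_premium=3),
          bull_money_spread(S_T, K_low=95, K_high=105, prem_low=8, prem_high=3))
```

The straddle loses exactly the two premiums when the price does not move, while the spread's gains and losses are both capped, as in Figures 5.12 and 5.14.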

Part III General Theory of VaR

Introduction 6 Theory of VaR 7 VaR estimation techniques 8 Setting up a VaR methodology

Introduction

As we saw in Part II, the sheer variety of products available on the markets (linear and otherwise, derivatives and underlying products) implies a priori a multifaceted understanding of risk, which by nature is difficult to harmonise. Ideally, therefore, we should identify a single risk indicator that estimates the loss likely to be suffered by the investor, together with the probability of that loss arising. This indicator is VaR.

There are three classic techniques for estimating VaR:

1. The estimated variance–covariance matrix method.
2. The Monte Carlo simulation method.
3. The historical simulation method.

An in-depth analysis of each of these methods will show their strong and weak points from both a theoretical and a practical viewpoint. We will then show, in detail, how VaR can be calculated using the historical simulation method; this is the subject of Chapter 8, together with a file on the accompanying CD-ROM entitled 'Ch 8', which contains the Excel spreadsheets relating to these calculations.

6 Theory of VaR

6.1 THE CONCEPT OF 'RISK PER SHARE'

6.1.1 Standard measurement of risk linked to financial products

The various methods for measuring the risks associated with an equity or a portfolio of equities were studied in Chapter 3. Two types of measurement can be defined: the intrinsic method and the relative method.

The intrinsic measure is the variance (or, equivalently, the standard deviation) of the return on the equity. In the case of a portfolio, we have to deal not only with variances but also with the pairwise correlations (or covariances). In practice they are evaluated by their ergodic estimators, that is, on the basis of historical observations (see Section 3.1).

The relative method measures the risk associated with the equity or portfolio of equities on the basis of how it depends upon market behaviour, the market being represented by a stock-exchange index (possibly a sector index). This dependence is measured by the beta of the equity or portfolio and gives rise to the CAPM type of valuation model (see Section 3.3).

The risk measurement methods for the other two products studied (bonds and options) fall into this second group. Among the risks associated with a bond or portfolio of bonds, those linked to interest-rate fluctuations can be modelled. In this way (see Section 4.1) we see the behaviour of the two components of the risk, selling the bond during its lifetime and reinvesting the coupons, according to the time that elapses between the issue of the security and its repayment. If we wish to summarise this behaviour in a simple index, we have to consider the duration of the bond; as duration is only a first-order approximation, a second measure, convexity (see Section 4.2), refines it.
Finally, the value of an option depends on a number of variables: underlying equity price, exercise price, maturity, volatility, risk-free rate.1 The most important driver is of course the underlying equity price, and for this reason two parameters, one of the ﬁrst order (delta) and another of the second order (gamma), are associated with it. The way in which the option price depends on the other variables gives rise to other sensitivity parameters. These indicators are known as ‘the Greeks’ (see Section 5.2).

6.1.2 Problems with these approaches to risk

The ways of measuring the risks associated with these products, or with a portfolio of them, whatever contribution they may make to the management of these assets, have features that do not allow for immediate generalisation.¹

¹ Possibly in two currencies if an exchange option is involved.

1. The representation of the risk associated with an equity through the variance in its returns (or through its square root, the standard deviation), or of the risk associated with an option through its volatility, takes account of both good and bad risks. A significant variance corresponds to the possibility of seeing returns vastly different from the expected return: very small values (small profits and even losses) as well as very large values (significant profits). This does not present many inconveniences in portfolio theory (see Section 3.2), in which equities or portfolios with significant variances are volatile elements, little appreciated by investors who prefer 'certainty' of return, with low risk of loss and low likelihood of significant profit. It is no less true that in the context of risk management, it is the downside risk that needs to be taken into consideration; another parameter must therefore be used to measure this risk.

2. The approach to the risks associated with equities in Markowitz's theory limits the description of a distribution to two parameters: a measure of return and a measure of deviation. An infinite number of probability laws, however, correspond to any one expected return–variance pairing. Some of them are skewed: Figure 6.1 shows two distributions that have the same expectation and the same variance but differ considerably in their skewness. In the same way, distributions with the same expectation, variance and skewness coefficient γ1 may show different levels of kurtosis, as shown in Figure 6.2. Distributions with higher peaks towards the middle and fatter tails than a normal distribution² (and therefore less weight on intermediate values) are described as leptokurtic and are characterised by a positive kurtosis coefficient γ2 (for the distributions in Figure 6.2, this coefficient totals −0.6 for the triangular and −1.2 for the rectangular).

Figure 6.1 Skewness of distributions

Figure 6.2 Kurtosis of distributions

² The definition of this law is given in Point (3) below.

Remember that this expected return–variance approach is sometimes justified through utility theory: when the utility function is quadratic, the expected utility of the return on the portfolio is expressed solely from the expectation–variance pair (see Section 3.2.7).

3. In order to justify the mean–variance approach, the equity portfolio theory deliberately postulates that the return follows a normal probability law, which is characterised precisely by the two parameters in question. If µ and σ respectively indicate the mean and the standard deviation of a normal random variable, this variable has density:

f(x) = (1 / (σ√(2π))) · exp[−(1/2) · ((x − µ)/σ)²]

This symmetrical distribution, very important in probability theory, is found everywhere in statistics because of the central limit theorem. The graph of this density is shown in Figure 6.3.

A series of studies shows that normality of equity returns is a hypothesis that can be accepted, at least as a first approximation, provided the period over which the return is calculated is not too short: weekly and monthly returns do not diverge too far from a normal law, but daily returns tend to diverge and follow a leptokurtic distribution instead.³ If one wishes to take account of the skewness and leptokurticity of the distribution of returns, one solution is to replace the normal distribution with a distribution that depends on more parameters, such as the Pearson distribution system,⁴ and to estimate the parameters so that µ, σ², γ1 and γ2 correspond to the observations. Nevertheless, the choice of distribution remains wholly arbitrary. Finally, for returns on securities other than equities, and for other elements involved in risk management, the normality hypothesis is clearly lacking, and a more general risk measurement index therefore needs to be constructed.

Figure 6.3 Normal distribution

³ We will deal again with the effects of kurtosis on risk evaluation in Section 6.2.2.
⁴ Johnson N. L. and Kotz S., Continuous Univariate Distributions, John Wiley and Sons, Ltd, 1970.

4. Another problem, by no means insignificant, is that concepts such as duration and convexity of bonds, variances of returns on equities, or the delta, gamma, rho or theta
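The moment coefficients γ1 and γ2 discussed above are estimated from a return series in the obvious way. A sketch on simulated data (the series itself is an assumption; real returns would come from market observations):

```python
import random
import statistics

# Moment-based estimates of skewness (gamma1) and excess kurtosis (gamma2)
# on a simulated normal return series; both should be close to zero here.
random.seed(0)
returns = [random.gauss(0.0, 0.01) for _ in range(10_000)]

m = statistics.fmean(returns)
s = statistics.pstdev(returns)
gamma1 = sum((x - m) ** 3 for x in returns) / (len(returns) * s ** 3)
gamma2 = sum((x - m) ** 4 for x in returns) / (len(returns) * s ** 4) - 3
```

Applied to actual daily returns, γ2 would typically come out positive, the leptokurtosis referred to in the text.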

option parameters do not, despite their usefulness, actually 'say' very much as risk measurement indices. In fact, they state neither the size of the loss that one is likely to suffer nor the probability of it occurring. At most, the loss–probability pairing can be calculated from the variance in the case of a normal distribution (see Section 6.2.2).

5. In Section 6.1.1 we set out a number of classical risk analysis models associated with three types of financial products: bonds, equities and options. These are specific models adapted to specific products. In order to take account of less 'classical' assets (such as certain sophisticated derivatives), we would have to construct as many adapted models as necessary and take account within them of exchange-rate risks, which cannot be avoided on international markets. Building this kind of structure is a mammoth task; the complexity lies not only in building the various blocks that make up the structure but also in assembling those blocks into a coherent whole. A new technique, which combines the various aspects of market risk analysis into a unified whole, therefore needs to be developed.

6.1.3 Generalising the concept of 'risk'

Market risk is the risk with which the investor is confronted because of his lack of knowledge of future changes in basic market variables such as security prices, interest rates, exchange rates etc. These variables, also known as risk factors, determine the prices of securities, conditional assets, portfolios etc. If the price of an asset is expressed as p and the risk factors that explain that price as X_1, X_2, ..., X_n, we have the wholly general relation p = f(X_1, X_2, ..., X_n) + ε, in which the residual ε corresponds to the difference between reality (the effective price p) and the valuation model (the function f).
If the price valuation model is linear (as for equities), the risk factors combine, through the central limit theorem, to give a distribution of the variable p that is normal (at least as a rough approximation) and is therefore defined by the two expectation–variance parameters alone. On the other hand, for some types of security such as options, the valuation model ceases to be linear; the above logic is no longer applicable and its conclusions cease to be valid.

We would point out that alongside the risk factors just mentioned, the following can be added as factors in market risk:

• the imperfect nature of valuation models;
• imperfect knowledge of the rules and limits particular to the institution;
• the impossibility of anticipating regulatory and legislative changes.

Note
As well as market risk, investors are confronted with other types of risk corresponding to the occurrence of exceptional events such as wars, oil crises etc. This group of risks cannot, of course, be estimated using techniques designed for market risk, and the techniques shown in this Part III therefore do not deal with these 'event-related' risks. This should not, however, prevent the wise risk manager from analysing his positions using value at risk theory, or from using 'catastrophe scenarios', in an effort to understand this type of exceptional risk.

6.2 VaR FOR A SINGLE ASSET

6.2.1 Value at Risk

In view of what has been set out in the previous paragraph, an index that allows estimation of the market risks facing an investor should:

• be independent of any distributional hypothesis;
• concern only downside risk, namely the risk of loss;
• measure the loss in question in a certain way;
• be valid for all types of assets and therefore either involve the various valuation models or be independent of these models.

Let us therefore consider an asset whose price⁵ is expressed as p_t at moment t. The variation observed for the asset over the period [s; t] is expressed as Δp_{s,t} and is defined as Δp_{s,t} = p_t − p_s. Note that if Δp_{s,t} is positive, we have a profit; a negative value, conversely, indicates a loss. The only hypothesis formulated is that the value of the asset evolves in a stationary manner: the random variable Δp_{s,t} has a probability law that depends on the interval only through its duration (t − s). The interval [s; t] can thus be replaced by the interval [0; t − s], and the variable Δp will from now on carry only the duration of the interval as its index. We therefore adopt the definitive definition Δp_t = p_t − p_0.

The 'value at risk' of the asset in question, for the duration t and the probability level q, is defined as an amount termed VaR such that the variation Δp_t observed for the asset during the interval [0; t] will only be less than VaR with a probability of (1 − q):

Pr[Δp_t ≤ VaR] = 1 − q

or, equivalently:

Pr[Δp_t > VaR] = q

By expressing as F_Δp and f_Δp respectively the distribution function and the density function of the random variable Δp_t, we arrive at the definition of VaR illustrated in Figures 6.4 and 6.5.

Figure 6.4 Definition of VaR based on distribution function

Figure 6.5 Definition of VaR based on density function

⁵ In this chapter, the theory is presented on the basis of the value, the price of assets, portfolios etc. The same developments can be made on the basis of returns on these elements. The following two chapters will show how this second approach is the one that is adopted in practice.

It is evident that two parameters are involved in defining the concept of VaR: the duration t and the probability q. In practice, t is fixed once and for all (one day or one week, for example), and VaR is then calculated as a function of q, written VaR_q if there is a risk of confusion; it is indeed possible to calculate VaR for several different values of q.

Example
If VaR at 98 % equals −500 000, this means that there are 98 chances out of 100 that the loss on the asset in question will not exceed 500 000 over the period in question.

Note 1
As we will see in Chapter 7, some methods of estimating VaR are based on a distribution of the value variation that does not have a density. For such random variables, as for discrete variables, the definition just given lacks precision. Thus, when 1 − q corresponds to a jump in the distribution function, no suitable value for the loss can be given, and the definition is adapted as shown in Figure 6.6. In the same way, when 1 − q corresponds to a plateau in the distribution function, an infinite number of values are suitable; the least favourable of these values, that is, the smallest, is chosen as a safety measure, as can be seen in Figure 6.7. In order to take account of this note, the fully rigorous definition of VaR takes the following form:

VaR_q = min { V : Pr[Δp_t ≤ V] ≥ 1 − q }

Figure 6.6 Case involving a jump

Figure 6.7 Case involving a plateau

Table 6.1 Probability distribution of loss

Δp    Pr
−5    0.05
−4    0.05
−3    0.05
−2    0.10
−1    0.15
 0    0.10
 1    0.20
 2    0.15
 3    0.10
 4    0.05

Example
Table 6.1 shows the probability law for the variation in value. For this distribution, we have VaR_0.90 = −4 and VaR_0.95 = −5.

Note 2
Clearly, VaR is neither the loss that should be expected nor the maximum loss that is likely to be incurred; it is instead a level of loss that will only be exceeded with a probability fixed a priori. It is a parameter calculated on the basis of the probability law of the 'variation in value' variable, and it therefore incorporates all the parameters of that distribution. VaR is not, for that reason, suitable for drawing up a classification of securities: as we have seen for equities, the comparison of various assets rests on the simultaneous consideration of two parameters, the expected return (or loss) and a measure of the dispersion of that return.

Note 3
On the other hand, it is essential when defining VaR to be fully aware of the duration on the basis of which the parameter is evaluated. The parameter, calculated for several different portfolios or departments within an institution, is only comparable if the reference period is the same. The same applies if VaR is being used as a comparison index for two or more institutions.
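The strict definition VaR_q = min{V : Pr[Δp_t ≤ V] ≥ 1 − q} can be applied mechanically to the discrete distribution of Table 6.1; a sketch:

```python
# VaR_q = min{ V : Pr[dP <= V] >= 1 - q } for the discrete law of Table 6.1.
values = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4]
probs = [0.05, 0.05, 0.05, 0.10, 0.15, 0.10, 0.20, 0.15, 0.10, 0.05]

def var_discrete(values, probs, q):
    """Smallest value V whose cumulative probability reaches 1 - q."""
    cumulative = 0.0
    for v, p in sorted(zip(values, probs)):
        cumulative += p
        if cumulative >= (1 - q) - 1e-12:  # tolerance for float rounding
            return v

print(var_discrete(values, probs, 0.90))  # -4
print(var_discrete(values, probs, 0.95))  # -5
```

The 0.95 case illustrates the 'plateau' convention: the cumulative probability equals 0.05 exactly at −5, and the smallest (least favourable) suitable value is retained.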

Note 4
Sometimes a different definition of VaR is found,⁶ one that takes account not of the variation in value itself but of the difference between that variation and the expected variation. More specifically, this value at risk (for the duration t and the probability level q) is defined as the amount (generally negative) termed VaR* such that the variation observed during the interval [0; t] will only fall below the expected variation by more than |VaR*| with a probability of (1 − q). Thus, if the expected variation is expressed as E(Δp_t), the definition reads:

Pr[Δp_t − E(Δp_t) ≤ VaR*] = 1 − q

or, again:

Pr[Δp_t > VaR* + E(Δp_t)] = q

It is evident that these two concepts are linked, as we clearly have VaR = VaR* + E(Δp_t).

6.2.2 Case of a normal distribution

In the specific case where the random variable Δp_t follows a normal law with mean E(Δp_t) and standard deviation σ(Δp_t), the definition can be rewritten as:

Pr[ (Δp_t − E(Δp_t)) / σ(Δp_t) ≤ (VaR_q − E(Δp_t)) / σ(Δp_t) ] = 1 − q

This shows that the expression (VaR_q − E(Δp_t)) / σ(Δp_t) is the (1 − q) quantile of the standard normal distribution, ordinarily expressed as z_{1−q}. As z_{1−q} = −z_q, this allows VaR to be written in the very simple form

VaR_q = E(Δp_t) − z_q · σ(Δp_t)

in terms of the expectation and standard deviation of the variation. In the same way, the parameter VaR* is calculated simply, for a normal distribution, as VaR*_q = −z_q · σ(Δp_t). The values of z_q are found in the normal distribution tables.⁷ A few examples of these values are given in Table 6.2.

Table 6.2 Normal distribution quantiles

q       z_q
0.500   0.0000
0.600   0.2533
0.700   0.5244
0.800   0.8416
0.850   1.0364
0.900   1.2816
0.950   1.6449
0.960   1.7507
0.970   1.8808
0.975   1.9600
0.980   2.0537
0.985   2.1701
0.990   2.3263
0.995   2.5758

⁶ Jorion P., Value at Risk, McGraw-Hill, 2001.
⁷ Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 118.
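As a sketch, the quantiles of Table 6.2 and the normal-law VaR formula of this section can be reproduced with Python's standard library alone:

```python
from statistics import NormalDist

# Quantiles z_q of the standard normal law, as tabulated in Table 6.2.
z = {q: NormalDist().inv_cdf(q) for q in (0.90, 0.95, 0.975, 0.99)}

def var_normal(mean, stdev, q):
    """VaR_q = E(dP) - z_q * sigma(dP) under the normality hypothesis."""
    return mean - NormalDist().inv_cdf(q) * stdev
```

For instance, `var_normal(100, 80, 0.95)` reproduces the worked example that follows in the text.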

Example
If a security gives an average profit of 100 over the reference period with a standard deviation of 80, we have E(Δp_t) = 100 and σ(Δp_t) = 80, which allows us to write:

VaR_0.95 = 100 − (1.6449 × 80) = −31.6
VaR_0.975 = 100 − (1.9600 × 80) = −56.8
VaR_0.99 = 100 − (2.3263 × 80) = −86.1

The loss incurred by this security will therefore only exceed 31.6 (respectively 56.8 and 86.1) five times (respectively 2.5 times and once) in 100.

Note
It was indicated in Section 6.1.2 that the normality hypothesis is far from valid in all circumstances. In particular, it has been shown that the daily returns on equities are better represented by a Pareto or Student distribution,⁸ that is, by leptokurtic distributions. Thus, for a Student distribution with ν degrees of freedom (where ν > 2), the variance is

σ² = 1 + 2/(ν − 2)

and the kurtosis coefficient (for ν > 4) is:

γ2 = 6/(ν − 4)

This last quantity is always positive, which shows that the Student distribution is leptokurtic in nature. For various numbers of degrees of freedom ν, Table 6.3 shows the coefficient γ2 and the quantiles z_q for q = 0.95, q = 0.975 and q = 0.99 relative to these Student distributions,⁹ standardised beforehand (the variable is divided by its standard deviation) in order to allow a useful comparison between these figures and those obtained on the basis of the standard normal law (Table 6.3).

Table 6.3 Student distribution quantiles

ν        γ2      z_0.95   z_0.975   z_0.99
5        6.00    2.601    3.319     4.344
10       1.00    2.026    2.491     3.090
15       0.55    1.883    2.289     2.795
20       0.38    1.818    2.199     2.665
25       0.29    1.781    2.148     2.591
30       0.23    1.757    2.114     2.543
40       0.17    1.728    2.074     2.486
60       0.11    1.700    2.034     2.431
120      0.05    1.672    1.997     2.378
normal   0       1.645    1.960     2.326

⁸ Blattberg R. and Gonedes N., 'A comparison of stable and Student distributions as statistical models for stock prices', Journal of Business, Vol. 47, 1974, pp. 244–80.
⁹ Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 146.

This clearly shows that when the normal law is used in place of the Student laws, the VaR parameter is underestimated unless the number of degrees of freedom is high.

Example
With the same data as above, that is, E(Δp_t) = 100 and σ(Δp_t) = 80, and for 15 degrees of freedom, we find the following evaluations of VaR, instead of −31.6, −56.8 and −86.1 respectively:

VaR_0.95 = 100 − (1.883 × 80) = −50.6
VaR_0.975 = 100 − (2.289 × 80) = −83.1
VaR_0.99 = 100 − (2.795 × 80) = −123.6
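A sketch re-computing the comparison above, with the z values copied from Table 6.3 (E and σ as in the example):

```python
# Normal versus Student(15) VaR for E(dP) = 100, sigma(dP) = 80.
# The quantiles below are taken from Table 6.3.
E, sigma = 100, 80
z_normal = {0.95: 1.645, 0.975: 1.960, 0.99: 2.326}
z_student_15 = {0.95: 1.883, 0.975: 2.289, 0.99: 2.795}

var_n = {q: E - z * sigma for q, z in z_normal.items()}
var_t = {q: E - z * sigma for q, z in z_student_15.items()}
# The Student-based figures are systematically more severe (more negative),
# illustrating the underestimation produced by the normal law.
```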

6.3 VaR FOR A PORTFOLIO

6.3.1 General results

Consider a portfolio consisting of N assets in respective quantities¹⁰ n_1, ..., n_N. If the price of the j-th security is termed p_j, the price p_P of the portfolio will of course be given by:

p_P = Σ_{j=1}^{N} n_j p_j

The price variation obeys the same relation:

Δp_P = Σ_{j=1}^{N} n_j Δp_j

Even once the distribution of the various Δp_j elements is known, it is not easy to determine the distribution of Δp_P: the probability law of a sum of random variables is only easy to determine if these variables are independent, and that is clearly not the case here. It is, however, possible to find the expectation and variance of Δp_P on the basis of the expectations, variances and covariances of the various Δp_j elements:

E(Δp_P) = Σ_{j=1}^{N} n_j E(Δp_j)

var(Δp_P) = Σ_{i=1}^{N} Σ_{j=1}^{N} n_i n_j cov(Δp_i, Δp_j)

where, when the two indices are equal, cov(Δp_i, Δp_i) = var(Δp_i). The relation that gives var(Δp_P) is the one that justifies the principle of diversification in portfolio management: the imperfect correlations (…)

¹⁰ It can be shown that when prices are replaced by returns, the numbers n_j of assets in the portfolio must be replaced by proportions X_j (positive numbers whose sum is 1), representing the respective stock-exchange capitalisation weights of the various securities (see Chapter 3).
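The expectation and variance relations above can be sketched numerically for a two-asset portfolio (all figures below are hypothetical):

```python
# Expectation and variance of the portfolio variation dP_P.
n = [10, 5]            # quantities n_j held
mean = [1.0, 2.0]      # E(dP_j)
cov = [[4.0, 1.5],     # cov(dP_i, dP_j); diagonal entries are the variances
       [1.5, 9.0]]
N = len(n)

E_P = sum(n[j] * mean[j] for j in range(N))
var_P = sum(n[i] * n[j] * cov[i][j] for i in range(N) for j in range(N))
# With perfect correlation, cov[0][1] would equal sqrt(4 * 9) = 6 and var_P
# would rise to 1225; the gap to the value computed here (775) is the
# diversification effect produced by the imperfect correlation.
```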

Appendix 4 sets out the theoretical bases for this method in brief.
³⁷ Gnedenko B. V., 'On the limit distribution for the maximum term in a random series', Annals of Mathematics, Vol. 44, 1943, pp. 423–53. Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5.
³⁸ Jenkinson A. F., 'The frequency distribution of the annual maximum (or minimum) value of meteorological elements', Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58.

VaR Estimation Techniques

231

and β_n (n = 1, 2, ...) such that the limit (for n → ∞) of the random variable

Y_n = [max(X_1, ..., X_n) − β_n] / α_n

is not degenerate, this variable will follow a probability law that depends on a real parameter τ, defined by the distribution function:

F_Y(y) = 0                        if y ≤ 1/τ (when τ < 0)
F_Y(y) = exp[−(1 − τy)^{1/τ}]     if y > 1/τ (when τ < 0), for all real y (when τ = 0), or if y < 1/τ (when τ > 0)
F_Y(y) = 1                        if y ≥ 1/τ (when τ > 0)

This holds independently of the common distribution of the X_i.³⁹ The probability law involved is the generalised extreme value distribution. The numbers α_n, β_n and τ are interpreted respectively as a dispersion parameter, a location parameter and a tail parameter (see Figure 7.6). Thus, τ < 0 corresponds to X_i values with a fat-tailed distribution (decrease slower than exponential), τ = 0 to a thin-tailed distribution (exponential decrease) and τ > 0 to a zero-tailed distribution (bounded support).

7.4.2.2 Estimation of parameters by regression

The methods⁴⁰ that allow the α_n, β_n and τ parameters to be estimated by regression use the fact that this random variable Y_n (or, more precisely, its distribution) can in practice be estimated by sampling on a historical basis: N periods, each of duration n, supply N values of the loss variable in question.

Figure 7.6 Distribution of extremes

³⁹ When τ = 0, (1 − τy)^{1/τ} is interpreted as being equal to its limit e^{−y}.
⁴⁰ Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958. Longin F. M., 'Extreme value theory: presentation and first applications in finance', Journal de la Société Statistique de Paris, Vol. 136, 1995, pp. 77–97. Longin F. M., 'The asymptotic distribution of extreme stock market returns', Journal of Business, No. 69, 1996, pp. 383–408. Longin F. M., 'From value at risk to stress testing: the extreme value approach', Journal of Banking and Finance, No. 24, 2000, pp. 1097–130.

We express the successive observations of the variation-in-value variable as x_1, x_2, ..., x_{Nn}, and the extreme value observed in the i-th 'section' of observations as ŷ_{i,n} (i = 1, ..., N):

ŷ_{1,n} = max(x_1, ..., x_n)
ŷ_{2,n} = max(x_{n+1}, ..., x_{2n})
...
ŷ_{N,n} = max(x_{(N−1)n+1}, ..., x_{Nn})

Let us arrange these observations in order of increasing magnitude, expressing the values thus arranged as y_i (i = 1, ..., N):

y_1 ≤ y_2 ≤ ... ≤ y_N

It can be demonstrated that if the extremes observed are in fact a representative sample of the probability law given by the extreme value theorem, we have

F_Y((y_i − β_n)/α_n) = i/(N + 1) + u_i    (i = 1, ..., N)

where the u_i values follow a zero-expectation normal law. When this relation is transformed by taking the iterated logarithm (logarithm of the logarithm) of the two sides, we obtain:

−ln[−ln(i/(N + 1))] = −ln[−ln F_Y((y_i − β_n)/α_n)] + ε_i
                    = −ln[−ln exp(−(1 − τ(y_i − β_n)/α_n)^{1/τ})] + ε_i
                    = −(1/τ) ln[1 − τ(y_i − β_n)/α_n] + ε_i
                    = (1/τ) {ln α_n − ln[α_n − τ(y_i − β_n)]} + ε_i

This relation constitutes a nonlinear regression equation in the three parameters α_n, β_n and τ. Note that when we are dealing with a distribution of extremes with a thin tail (τ parameter not significantly different from 0), we have F_Y(y) = exp[−exp(−y)], and another regression relationship has to be used:

−ln[−ln(i/(N + 1))] = −ln[−ln F_Y((y_i − β_n)/α_n)] + ε_i
                    = −ln[−ln exp(−exp(−(y_i − β_n)/α_n))] + ε_i
                    = (y_i − β_n)/α_n + ε_i
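In the thin-tail case the last relation is linear, so the block-maxima procedure reduces to an ordinary least-squares fit of the sorted maxima against the reduced variate −ln[−ln(i/(N+1))]. A sketch on simulated data (the series, n and N are assumptions for illustration):

```python
import math
import random

# Thin-tail (Gumbel) regression sketch: split the history into N sections of
# n observations, take the maximum of each, sort, and regress y_i on the
# reduced variate; alpha_n is the slope and beta_n the intercept.
random.seed(1)
n, N = 50, 40
history = [random.gauss(0.0, 1.0) for _ in range(n * N)]

maxima = sorted(max(history[i * n:(i + 1) * n]) for i in range(N))
reduced = [-math.log(-math.log(i / (N + 1))) for i in range(1, N + 1)]

# Ordinary least squares of the sorted maxima on the reduced variate
mx = sum(reduced) / N
my = sum(maxima) / N
alpha_n = (sum((x - mx) * (y - my) for x, y in zip(reduced, maxima))
           / sum((x - mx) ** 2 for x in reduced))
beta_n = my - alpha_n * mx
```

For Gaussian data, α_n and β_n estimate the dispersion and location of the limiting Gumbel law of 50-observation maxima.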

VaR Estimation Techniques


7.4.2.3 Estimating parameters using the semi-parametric method

As well as the technique for estimating the α_n, β_n and τ parameters, there are semi-parametric methods,41 specifically indicated for estimating the tail parameter τ. They are, however, time consuming in terms of calculation, as an intermediate parameter has to be estimated using a Monte Carlo-type method. We show the main aspects here. After the observations are arranged in increasing order, x_(1) ≤ ... ≤ x_(n), the i-th observation is termed x_(i). The first stage consists of setting a limit M so that only the M highest observations from the sample (of size n) are used in shaping the tail of the distribution. It can be shown42 that an estimator (termed Hill's estimator) of the tail parameter is given by:

τ̂ = (1/M) Σ_{k=1}^{M} [ln x_(n−k+1) − ln x_(n−M)]
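As a rough sketch (not code from the book, whose worked examples are Excel-based), the Hill-type estimate above can be written in a few lines of Python; the function name and interface are ours, and positive observations are assumed so that the logarithms are defined:

```python
import math

def hill_tail_estimate(sample, M):
    """Hill-type estimate of the tail parameter from the M largest
    observations of a positive sample, following the formula in the text.
    (Illustrative helper, not part of the book's CD-ROM.)"""
    xs = sorted(sample)                  # x_(1) <= ... <= x_(n)
    n = len(xs)
    x_n_minus_M = xs[n - M - 1]          # x_(n-M) in the 1-based notation
    # tau_hat = (1/M) * sum_{k=1..M} [ln x_(n-k+1) - ln x_(n-M)]
    return sum(math.log(xs[n - k]) - math.log(x_n_minus_M)
               for k in range(1, M + 1)) / M
```

As the next paragraph stresses, the result is quite sensitive to the choice of the threshold M.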

The choice of M is not easy to make, as the quality of Hill's estimator is quite sensitive to this threshold. If the threshold is fixed too low, the distribution tail will be too rich and the estimator will be biased downwards; if it is fixed too high, only a small number of observations will be available for making the estimation. The optimal choice of M can be made using a graphic method43 or the bootstrap method,44 which we will not develop here. An estimator proposed by Danielsson and De Vries for the limit distribution function is given by:

F̂_Y(y) = 1 − (M/n)(x_(n−M)/y)^{1/τ̂}

This relation is valid for y ≥ x_(n−M) only.

7.4.2.4 Calculation of VaR

Once the parameters have been estimated, the VaR can be determined. We explain the procedure to be followed when the tail model is estimated using the semi-parametric method,45 presenting the case of a single risk factor. Of course we will invert the process to some extent, given that it is the left extremity of the distribution that has to be used. The future value of the risk factor is estimated in exactly the same way as for the historical simulation:

X^(t)(1) = X(0) + Δ^(t)X(0),   t = −T + 1, ..., −1, 0

41 Beirland J., Teugels J. L. and Vynckier P., Practical Analysis of Extreme Values, Leuven University Press, 1996.
42 Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of Statistics, Vol. 3, 1975, pp. 1163–74. Pickands J., Statistical inference using extreme order statistics, Annals of Statistics, Vol. 3, 1975, pp. 119–31.
43 McNeil A. J., Estimating the tails of loss severity distributions using extreme value theory, Mimeo, ETH Zentrum Zurich, 1996.
44 Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Mimeo, Iceland University and Tinbergen Institute Rotterdam, 1997. Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Journal of Empirical Finance, No. 4, 1997, pp. 241–57.
45 Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group Discussion Paper 273, London School of Economics, 1997. Embrechts P., Klüppelberg C. and Mikosch T., Modelling Extremal Events for Insurance and Finance, Springer Verlag, 1999. Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhäuser Verlag, 2001.


The choice of M is made using one of the methods mentioned above, and the tail parameter is estimated by:

τ̂ = (1/M) Σ_{k=1}^{M} [ln x_(k) − ln x_(M+1)]

The adjustment for the left tail of the distribution is made by

F̂_Y(y) = (M/n)(x_(M+1)/y)^{1/τ̂}

This relation is valid for y ≤ x_(M+1) only. The distribution tail is simulated46 by taking a number of values at random from the re-evaluated distribution of the X^(t)(1) values and replacing each x value lower than x_(M+1) by the corresponding value obtained from the distribution of the extremes, that is, for the level of probability p relative to x, by the solution x̂_p of the equation

p = (M/n)(x_(M+1)/x̂_p)^{1/τ̂}

In other words:

x̂_p = x_(M+1) (M/(np))^{τ̂}
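Under the same notation, the left-tail fit and the extreme quantile x̂_p can be sketched as follows; this is an illustrative Python fragment of ours (not the book's spreadsheet), assuming positive re-evaluated values so that the logarithms are defined:

```python
import math

def extreme_quantile(sample, M, p):
    """Solve p = (M/n) * (x_(M+1)/x_p)^(1/tau_hat) for x_p, i.e.
    x_p = x_(M+1) * (M/(n*p))**tau_hat, with tau_hat estimated from
    the M smallest observations.  Illustrative sketch only."""
    xs = sorted(sample)                  # increasing order
    n = len(xs)
    x_threshold = xs[M]                  # x_(M+1) in the 1-based notation
    tau_hat = sum(math.log(xs[k]) - math.log(x_threshold)
                  for k in range(M)) / M
    return x_threshold * (M / (n * p)) ** tau_hat
```

By construction, p = M/n returns the threshold x_(M+1) itself, and smaller probabilities p yield values deeper in the left tail.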

Note
Extreme value theory, which avoids the adverse effects of one or a few outliers, has a serious shortcoming despite its impressive appearance: the historical period used must have a very long duration, regardless of the estimation method. In fact:

• In the method based on nonlinear regression, Nn observations must be gathered, in which the duration n over which each extreme value is measured must be relatively long. Moreover, the extreme value theorem is an asymptotic theorem, so the number N of durations must be large if one wishes to work with a sample distribution that is representative of the actual distribution of the extremes.
• In the semi-parametric method, a large number of observations are discarded as soon as the estimation process starts.

7.5 ADVANTAGES AND DRAWBACKS

We now move on to review the various advantages and drawbacks of each VaR estimation technique. To make things simpler, we will use the abbreviations shown in Figure 7.2: VC for the estimated variance–covariance matrix method, MC for Monte Carlo simulation and HS for historical simulation.

46 This operation can also be carried out by generating a uniform random variable in the interval [0; 1], taking the inverse of the observed distribution of X^(t)(1), and replacing the observed values lower than x_(M+1) by the value given by the extremes distribution.


7.5.1 The theoretical viewpoint

7.5.1.1 Hypotheses and limitations

(1) Let us first consider the presence or absence of a distributional hypothesis and its likely impact on the method. MC and HS do not formulate any distributional hypothesis. Only VC assumes that variations in price are distributed according to a normal law.47 Here, this hypothesis is essential:

• because of the technique used to split the assets into cashflows: only the multinormal distribution is such that a sum of the variables, even when correlated, is still distributed according to such a law;
• because the information supplied by RiskMetrics consists of the −z_q σ_k values (k = 1, ..., n) of the VaR* parameter for each risk factor, with z_q = 1.645, the normal distribution quantile for q = 0.95.

This hypothesis has serious consequences for certain assets such as options, whose returns are highly skewed, so that the method can no longer be applied. It is for this reason that RiskMetrics introduced a method based on the quantile concept for this type of asset, similar to MC and VC. For simpler assets such as equities, it has been demonstrated that variations in price are distributed according to a leptokurtic law (more pointed than the normal close to the expectation, with thicker tails and less probable intermediate values). Under the normality hypothesis, the VaR is therefore underestimated for such leptokurtic distributions, because of the greater probability associated with the extreme values. This phenomenon has already been observed for the Student distributions (see Section 6.2.2). It can also be verified in specific cases.

Example
Consider the two distributions in Figure 6.2 (Section 6.1.2): the triangular, defined by

f1(x) = (√3 − |x|)/3,   x ∈ [−√3; √3]

has thicker tails than the rectangular, for which

f2(x) = √6/6,   x ∈ [−√6/2; √6/2]

Table 7.6 shows a comparison of the two distributions. The underestimation of risk for the distribution with the thicker tails is shown by the fact that, for the high values of q used in practice:

VaR_q(triangular) = √(6(1 − q)) − √3
VaR_q(rectangular) = √6 (1 − q) − √6/2 > VaR_q(triangular)
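The two quantile expressions can be checked numerically; the following small Python verification is ours, not code from the book:

```python
import math

def var_triangular(q):
    """(1-q)-quantile of f1 on [-sqrt(3), sqrt(3)], from its left-tail
    CDF F1(x) = (sqrt(3) + x)**2 / 6 for x <= 0 (so q > 0.5)."""
    return math.sqrt(6 * (1 - q)) - math.sqrt(3)

def var_rectangular(q):
    """(1-q)-quantile of the uniform density f2 on [-sqrt(6)/2, sqrt(6)/2]."""
    return math.sqrt(6) * (1 - q) - math.sqrt(6) / 2

# For high confidence levels, the thicker-tailed triangular law gives
# the more negative quantile (the larger loss):
for q in (0.95, 0.99):
    assert var_rectangular(q) > var_triangular(q)
```

Both variances equal 0.5 (see Table 7.6), so the difference in the quantiles comes entirely from the tail shape.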

47 More precisely, it formulates the conditional normality hypothesis (normality with variance changing over time).


Table 7.6 Comparison of two distributions

        Triangular   Rectangular
μ           0            0
σ²          0.5          0.5
γ1          0            0
γ2         −0.6         −1.2

In addition, numerical analyses carried out by F. M. Longin have clearly shown the underestimation of the VaR for leptokurtic distributions under the normality hypothesis. They have also shown that the underestimation increases as q moves closer to 1; in other words, precisely when we are interested in extreme risks. Thus, for a market portfolio represented by an index, he calculated that:

VaR_0.5(HS) = 1.6 · VaR_0.5(VC)
VaR_0.75(HS) = 2.1 · VaR_0.75(VC)
VaR_0.95(HS) = 3.5 · VaR_0.95(VC)
VaR_0.99(HS) = 5.9 · VaR_0.99(VC)

This problem can be remedied in VC by assuming, in the evolution model for the return R_jt = μ_j + σ_t ε_jt, a residual distributed not normally but in accordance with a generalised error law smoother than the normal law. In the MC and HS methods, the normality hypothesis is not formulated. In MC, however, if the portfolio is not re-evaluated by a new simulation, the hypothesis will be required, but only for that part of the method.

(2) The VC method, unlike MC and HS, relies explicitly on the hypothesis that asset prices are linear functions of the risk factors. This hypothesis forms the basis for the principle of splitting assets into cashflows. It fails, however, for certain groups of assets, such as options: a linear link between the option price and the underlying equity price assumes that delta is the only non-zero sensitivity parameter. For this reason, RiskMetrics has abandoned the VC methodology for this type of product and deals with it by calling on a Taylor development. Another estimation technique, namely MC, is sometimes indicated for dealing with this group of assets.

(3) The hypothesis of stationarity can take two forms. In its more exacting form, it states that the joint (theoretical and unknown) distribution of the price variations of the different risk factors, over the VaR calculation horizon, is well estimated by the observed variations of these prices during the available historical period.
The hypothesis of stationarity is formulated in this way for the HS method. However, if it is not verified because of the presence of a trend in the observed data, it is easy to take account of the trend when estimating the future value of the portfolio. A 'softer' form is sufficient for applying the VC method, as this method no longer relates to the complete distribution: the statistical parameters measured on the observed distribution of the price (or return) variations must be good estimates of these same (unknown) parameters for the horizon over which the VaR is being estimated. The VC


method does, however, have the drawback of being unable to depart from this hypothesis if a trend is present in the data.

(4) In the presentation of the three estimation methods, it was assumed that the VaR calculation horizon was equal to the periodicity of the historical observations.48 The usual use of VaR makes this period equal to one day for the management of dealing-room portfolios and 10 days under the prudential regulations,49 although a longer period can be chosen when measuring the risk associated with stable products such as investment funds. If, on the other hand, one wishes to consider a horizon (say one month) longer than the observation period (say one day), three methods may be applied:

• Estimating the VaR on the basis of monthly returns, even if the data are daily in nature. This leads to serious erosion of the accuracy of the initial observations.
• Using the formulae set out in the note in Section 7.1.2, which consist of multiplying the loss expectation by the horizon (here, the number of working days in the month) and the loss standard deviation by the square root of the horizon. This is of course only valid under a hypothesis of independence of the daily variations, and for methodologies that calculate the VaR on the basis of these two parameters only (the case of a normal distribution), such as VC. As HS cannot rely on the normality hypothesis, this way of working is incorrect for it50 and the previous technique should be applied.
• For MC, and for this method only, it is possible to generate not just a single future price value but a whole path of prices over the calculation horizon.

We now explain this last case a little further where, for example, the price evolution of an equity is represented by a geometric Brownian motion (see Section 3.4.2):

S_{t+dt} − S_t = S_t · (E_R · dt + σ_R · dw_t)

where the Wiener process (dw_t) obeys a normal law with zero expectation and variance equal to dt.
If one considers a normal random variable ε with zero expectation and unit variance, we can write:

S_{t+dt} − S_t = S_t · (E_R · dt + σ_R · ε · √dt)

Simulating a sequence of independent values of ε using the Monte Carlo method allows the variations S_{t+dt} − S_t to be obtained and therefore, starting from the last observed price S_0, allows the path of the equity's future price to be generated for a number of dates equal to the number of ε values simulated.51

48 The usual use of VaR involves making this period equal to one day. However, a longer period can be chosen when measuring the risk associated with stable products such as investment funds.
49 This 10-day horizon may, however, appear somewhat unrealistic given the speed and volume of the deals conducted in a dealing room.
50 As is pointed out quite justifiably by Hendricks D., Evaluation of value-at-risk models using historical data, FRBNY Economic Policy Review, 1996, pp. 39–69.
51 The process that we have described for MC is also applicable, provided sufficient care is taken, to a one-day horizon, with this period broken down into a small number of subperiods.

7.5.1.2 Models used

(1) The valuation models play an important part in the VC and MC methods. In the case of VC, they are even associated with a conditional normality hypothesis. For MC, the


search for a model is an essential (and difficult) part of the method; however, as there is a wide variety of models on offer, there is some guarantee as to the quality of the results. Conversely, the HS method is almost completely independent of these models; at most, it uses them as a pricing tool when putting together databases of asset prices. This is one of the many advantages of the method, which stem from the conceptual simplicity of the technique. To sum up, the risk associated with the quality of the models used is:

• significant and untreatable for VC;
• significant but manageable for MC;
• virtually zero for HS.

(2) As VC and HS are based on a hypothesis of stationarity, the MC method is the only one to make intensive use of models of asset price development over time (dynamic models). These models can improve the results of the method, provided they are properly adapted to the data and correctly estimated.

7.5.1.3 Data

The data needed to supply the VC method in its RiskMetrics version are:

• the partial VaRs for each of the elementary risks;
• the correlation matrix for the various risk-factor couples.

Thus, for n risk factors, n(n + 1)/2 different data are necessary. If, for example, one considers 450 elementary risk factors, 101 475 different data must be determined daily. Note that the RiskMetrics system makes all these data available to the user. The MC method consumes considerably less data; in addition to the history of the various risk factors, a number of correlations (between risk factors that explain the same asset) are essential. However, if the portfolio is re-evaluated by a new simulation in order to avoid the normality hypothesis, the variance–covariance matrix for the assets in the portfolio will be essential.
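The data count for the VC method is easily verified (a quick check of ours, not from the book):

```python
# Data volume required daily by the VC (RiskMetrics) method: one volatility
# (partial VaR) per risk factor plus one correlation per pair of factors,
# i.e. n + n*(n-1)/2 = n*(n+1)/2 values for n risk factors.
n = 450
assert n + n * (n - 1) // 2 == n * (n + 1) // 2
assert n * (n + 1) // 2 == 101_475   # the figure quoted for 450 factors
```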
Finally, the HS method is the least data consuming; as the historical periods already contain the structure of the correlation between risk factors and between assets, this last information does not need to be obtained from an outside body or calculated on the basis of historical periods. 7.5.2 The practical viewpoint 7.5.2.1 Data Most of the data used in the VC method cannot be directly determined by an institution applying a VaR methodology. Although the institution knows the composition of its portfolio and pays close attention to changes in the prices of the assets making up the portfolio, it cannot know the levels of volatility and the correlations of basic risk factors, some of which can only be obtained by consulting numerous outside markets. The VC method can therefore only be effective if all these data are available in addition, which is the case if the RiskMetrics system is used. This will, however, place the business at a


disadvantage, as it is not the provider of the data that it uses. It will therefore not be possible to analyse these data critically, or indeed to correct them in case of error. Conversely, the MC method and especially the HS method use data from inside the business, or data that can easily be calculated on the basis of historical data, with all the flexibility that this implies with respect to their processing, conditioning, updating and control.

7.5.2.2 Calculations

Of course, the three methods require a few basic financial calculations, such as the application of the principle of discounting. We now look at the way in which the three techniques differ from the point of view of the calculations to be made.

The calculations required for the HS method are very limited and easily programmable on ordinary computer systems, as they are limited to arithmetical operations, sorting processes and the use of one or another valuation model when the price to be integrated into the historical series is determined.

The VC method makes greater use of the valuation models, since the principle of splitting assets and mapping cashflows is based on this group of models and since options are dealt with directly using the Black and Scholes model. Most notably, these valuation models will include regressions, as equity values are expressed on the basis of national stock-exchange indices. In addition, matrix calculations are needed when the portfolio is re-evaluated on the basis of the variance–covariance matrix.

In contrast to these techniques, which consume relatively little in terms of calculations (especially HS), the MC method requires considerable calculation power and time:

• valuation models (including regressions), taking account of changes over time and therefore estimations of stochastic process parameters;
• forecasting, on the basis of historical periods, of a number of correlations (between the risk factors that explain the same asset on one hand, and between assets in the same portfolio for the purpose of its revaluation on the other);
• matrix algebra, including the Choleski decomposition method;
• finally and most significantly, a considerable number of simulations.
Thus, if M is the number of simulations required in the Monte Carlo method to obtain a representative distribution and the asset for which a price must be generated depends on n risk factors, a total of nM simulations will be necessary for the asset in question. If the portfolio is also revalued by simulation (with a bulky variance–covariance matrix), the number of calculations increases still further.

7.5.2.3 Installation and use

The basic principles of the VC method, with its splitting of assets and mapping of cashflows, cannot be easily understood at all levels within the business that uses the methodology; and the risk management function cannot be truly effective without positive assistance from all departments of the business. On the other hand, this method has a great advantage: RiskMetrics actually exists, and the great number of data that supply the system are


Table 7.7 Advantages and drawbacks

                            VC                         MC                           HS
Distributional hypothesis   Conditional normality      No                           No
Linearity hypothesis        Yes (Taylor if options)    No                           No
Stationarity hypothesis     Yes                        No                           Yes (method to be adapted if trend)
Horizon                     1 observation period       Paths (any duration)         1 observation period
Valuation models            Yes (unmanageable risk)    Yes (manageable risk)        External
Dynamic models              No                         Yes                          No
Required data               Partial VaR* and           Histories (+ var.–cov.       Histories
                            correlation matrix         matrix of assets)
Source of data              External                   In house                     In house
Sensitivity                 Average                    Average                      Outliers
Calculation                 Valuation models;          Valuation models;            External valuation models
                            matrix calculation         statistical estimates;
                                                       matrix calculation;
                                                       simulations
Set-up                      Easy                       Difficult                    Easy
Understanding               Difficult                  Average                      Easy
Flexibility                 Low                        Low                          Good
Robustness                  Too many hypotheses        Good                         Good

also available. The drawback, of course, is the lack of transparency caused by the external origin of the data.

Although the basic ideas of the MC method are simple and natural, putting them into practice is much more problematic, mainly because of the sheer volume of calculation involved.

The HS method relies on theoretical bases as simple and natural as those of the MC method. In addition, the system is easy to implement and its principles can be easily understood at all levels within a business, which will be able to adopt it without problems. It is also a very flexible methodology: unlike the other methods, which are made clumsy by their vast number of calculations, aggregation can be carried out at many different levels and in many different contexts (an investment fund, a portfolio, a dealing room, an entire institution). Finally, the small number of basic hypotheses and the almost complete absence of complex valuation models make the HS method particularly reliable in comparison with MC and especially VC.

Let us end by recalling one drawback of the HS method, inherent in the simplicity of its design: its great sensitivity to the quality of the data. In fact, one or a few outliers (whether exceptional in nature or caused by an error) will greatly influence the VaR value over a long period (equal to the duration of the historical series). It has been said that extreme value theory can overcome this problem but, unfortunately, the huge number of calculations that have to be made when applying it is prohibitive. Instead, we would recommend that institutions using the HS method set


up a very rigorous data control system and systematically analyse any exceptional observations (that is, outliers); this is possible in view of the internal nature of the data used here.

7.5.3 Synthesis

We end by setting out, in Table 7.7, a synoptic table52 of all the arguments put forward.

52 With regard to the horizon for the VC method, note that the VaR can be obtained for a horizon H longer than the periodicity of the observations by multiplying the one-period VaR by √H, except in the case of optional products.

8 Setting Up a VaR Methodology

The aim of this chapter is to demonstrate how the VaR can be calculated using the historical simulation method. So that the reader can work through the examples in detail, we felt it helpful to include with this book a CD-ROM of Excel spreadsheets. The file, called 'CH8.XLS', contains all the information relating to the examples dealt with below. No part of the sheets making up the file has been hidden, so the calculation procedures are totally transparent. The examples presented have been deliberately simplified; the actual portfolios of banks, institutions and companies will be much more complex than what the reader sees here. The great variety of financial products, and the number of currencies available the world over, have compelled us to make certain choices. In the final analysis, however, the aim is to explain the basic methodology so that the user can transpose historical simulation into the reality of his business. Being aware of the size of some companies' portfolios, we also point out a number of simplification errors to be avoided.

8.1 PUTTING TOGETHER THE DATABASE

8.1.1 Which data should be chosen?

Relevant data are fundamental. As VaR deals with extreme values in a series of returns, a database error, which is implicitly an extreme value, will exert its influence for many days. The person responsible for putting together the data should make a point of testing the consistency of the new values added to the database every day, so that it does not become corrupted. The reliability of data depends upon:

• the source (internal or external);
• where applicable, the robustness of the model and of the hypotheses that allow the data to be determined;
• awareness of the market;
• human intervention in the data integration process.

Where the source is external, market operators will be good reference points for specialist data sources (exchange, long term, short term, derivatives etc.). Sources may be printed (financial newspapers and magazines) or electronic (Reuters, Bloomberg, Telerate, Datastream etc.). Prices may be taken 'live' (what is the FRA 3–6 USD worth on the market?) or calculated indirectly (calculation of a forward-forward on the basis of three- and six-month Libor USD, for example). The ultimate aim is to ensure consistency as time goes on. On a public holiday, the last known price will be used as the price for the day.


8.1.2 The data in the example

We have limited ourselves to four currencies (EUR, PLN, USD and GBP), in weekly data. For each of these currencies, 101 dates (from 19 January 2001 to 20 December 2002) have been selected. For these dates, we have put together a database containing the following prices:

• 1, 2, 3, 6, 12, 18, 24, 36, 48 and 60-month deposit and swap rates for EUR and PLN, and the same periods but only up to 24 months for USD and GBP;
• spot rates for three currency pairs (EUR/GBP, EUR/PLN and EUR/USD).

The database contains 3737 items of data.
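The size of the database can be verified by a quick count (our own check, assuming 10 tenors each for EUR and PLN, 7 each for USD and GBP, plus the 3 spot series):

```python
# 10 tenors (1, 2, 3, 6, 12, 18, 24, 36, 48, 60 months) for EUR and PLN,
# 7 tenors (up to 24 months) for USD and GBP, and 3 spot currency pairs:
series = 10 * 2 + 7 * 2 + 3          # 37 series in total
dates = 101                          # 19 January 2001 to 20 December 2002, weekly
assert series * dates == 3737        # the 3737 items of data quoted in the text
```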

8.2 CALCULATIONS

8.2.1 Treasury portfolio case

The methodology assumes that historical returns are applied to the current portfolio in order to estimate, through successive valuations of that portfolio, the maximum loss that will occur with a certain degree of confidence. The first stage, which is independent of the composition of the portfolio, consists of determining the past returns (in this case, weekly returns).

8.2.1.1 Determining historical returns

As a reminder (see Section 3.1.1), the formula that allows the return to be calculated1 is:

R_t = (C_t − C_{t−1})/C_{t−1}

For example, the weekly return on the three-month USD deposit between 19 January 2001 and 26 January 2001 is:

(0.05500 − 0.05530)/0.05530 = −0.5425 %

The results of applying this formula to the database are found on the 'Returns' sheet within CH8.XLS. For 101 rates, 100 weekly returns can be determined.

8.2.1.2 Composition of the portfolio

The treasury portfolio is located on the 'Portfolios' sheet within CH8.XLS. This sheet is entirely fictitious and has no basis in economic reality, either for the dates covered by the sample or, even less, at the time you read these lines. The only reality is the prices and rates prevailing at the dates chosen (and de facto the historical returns). The investor's currency is the euro. The term 'long' (or 'short') indicates:

• in terms of deposits, that the investor has borrowed (lent);

1 We could also have used the other expression for the return, that is, ln(C_t/C_{t−1}) (see also Section 3.1.1).


• in terms of foreign exchange, that the investor has purchased (sold) the first currency (EUR in the case of EUR/USD) in exchange for the second currency in the pair.

We have assumed that the treasury portfolio for which the VaR is to be calculated contains only new positions entered into on the date on which the maximum loss is being estimated: 20 December 2002. In a real portfolio, an existing contract must of course be revalued in relation to the period remaining to maturity. Thus, a nine-month deposit that has been running for six months (remaining period therefore three months) will require a database containing the prices and historical returns of three-month deposits in order to estimate the maximum loss for the currency in question. In addition, some interpolations may need to be made on the curve, as prices for some broken periods (such as a seven-month deposit that has been running for four months and 17 days) do not exist in the market. For each product in the treasury portfolio, we have therefore assumed that the contract prices obtained by the investor correspond exactly to those in the database on the date of valuation. The values in column 'J' ('Initial Price') of the 'Portfolios' sheet in CH8.XLS for the treasury portfolio thus correspond to the prices in the 'Rates' sheet in CH8.XLS on 20 December 2002 for the products and currencies in question.

8.2.1.3 Revaluation by asset type

We have said that historical simulation consists of revaluing the current portfolio by applying past returns to it; the results are not classified, and the VaR determined, until later. Account should, however, be taken of the nature of the product when applying the historical returns. Here, we envisage two types of product:

• interest-rate products;
• FX products.

A.
Interest rate product: deposit

Introduction: calculating the VBP
We saw in Section 2.1.2 that the value of a basis point (VBP) allows the sensitivity of an interest-rate position to a movement of one basis point in rates, upwards or downwards, to be calculated. Position 1 of the treasury portfolio (CH8.XLS, 'Portfolios' sheet, line 14) is a three-month GBP deposit (the investor is 'long') for a total of GBP 50 000 000 at a rate of 3.9400 %. The investor's interest here is in the three-month GBP rate increasing; he will then be able to reinvest his position at a more favourable rate. Otherwise, the position will make a loss. More generally, however, it is better to pay attention to the sensitivity of one's particular position. The first stage consists of calculating the interest on the maturity date:

I = C · R · ND/DIV

246

Asset and Risk Management

Here:

I represents the interest;
C represents the nominal;
R represents the interest rate;
ND represents the number of days in the period;
DIV represents the number of days in a year for the currency in question.

I = 50 000 000 × 0.0394 × 90/365 = 485 753.42

Let us now assume that rates increase by one basis point. The interest cashflow at the maturity date is then calculated on an interest-rate base of 0.0394 + 0.0001 = 0.0395. We therefore obtain:

I = 50 000 000 × 0.0395 × 90/365 = 486 986.30

As the investor in the example is 'long', that is, it is better for him to lend in order to cover his position, he will gain

ΔI = 50 000 000 × 0.0001 × 90/365 = |485 753.42 − 486 986.30| = 1 232.88 GBP

every time the three-month GBP rate increases by one basis point.

Historical return case
The VBP assumes a predetermined variation of one basis point each time, either upwards or downwards. In the example, this variation equals a profit (rise) or loss (fall) of GBP 1 232.88. In the same way, we can apply to the current rate of the position any other variation that the investor considers to be of interest: we stated in Section 2.1.2 that this was the case for simulations (realistic or catastrophic). However, if the investor believes that the best forecast2 of future variations in rates is a variation that he has already seen in the past, all he needs to do is apply a series of past variations to the current rate (on the basis of past returns) and calculate a law of probability from the results. On 19 January 2001 the three-month GBP rate was worth 5.72 %, while on 26 January 2001 it stood at 5.55 %. The historical return is −2.9720 % ('Returns' sheet, cell AG4). This means that:

0.0572 × (1 + (−0.029720)) = 0.0555

If we apply this past return to the current rate of the position ('Portfolios' sheet, cell J14), we will have:

0.0394 × (1 + (−0.029720)) = 0.038229

This rate would produce interest of:

I = 50 000 000 × 0.038229 × 90/365 = 471 316.70
As the investor is 'long', this drop in the three-month rate would produce a loss relative to the current rate, totalling:

471 316.70 − 485 753.42 = −14 436.73 GBP

The result is shown on the 'Treasury Reval' sheet, cell D3.

Error to avoid
Some people may be tempted to proceed on the basis of the difference between the past rates, 0.055500 − 0.057200 = −0.0017, and then to add that difference to the current rate:

2 The argument in favour of this assumption is that the variation has already occurred. The argument against, however, is that it cannot be assumed that it will recur in the future.

Setting Up a VaR Methodology


0.0394 − 0.0017 = 0.0377. This would lead to a loss of: I = 50 000 000 × (−0.0017) × 90/365 = −20 958.90. This is obviously different from the true result of −14 436.73. This method is blatantly false. To stress the concepts once again, if rates moved from 10 % to 5 % within one week (return of −50 %) a year ago, with the differential applied to a current position valued at a rate of 2 %, we would have a result of: 0.02 × (1 − 0.50) = 0.01 with the right method, but 0.02 − (0.10 − 0.05) = −0.03 with the wrong method. In other words, it is best to stick to the relative variations in interest rates and FX rates and not to the absolute variations.

B. FX product: spot

Position 3 in the treasury portfolio (CH8.XLS, ‘Portfolios’ sheet, line 16) is a purchase (the investor is ‘long’) of EUR/USD for a total of EUR75 000 000 at a price of USD1.0267 per EUR.

Introduction: calculating the value of a ‘pip’

A ‘pip’ equals one-hundredth of a USD cent in a EUR/USD quotation, that is, the fourth figure after the decimal point. The investor is ‘long’ as he has purchased euros and paid for the purchase in USD. A rise (fall) in the EUR/USD will therefore be favourable (unfavourable) for him. In the same way as the VBP for rate products, the sensitivity of a spot exchange position can be valued by calculating the effect of a variation of a ‘pip’, upwards or downwards, on the result for the position. The calculations are simple:

75 000 000 × 1.0267 = 77 002 500
75 000 000 × 1.0268 = 77 010 000
77 010 000 − 77 002 500 = 7500

Example of historical returns

On 19 January 2001, the spot EUR/USD was worth 0.9336, while on 26 January 2001 it stood at 0.9238. The historical return (‘Returns’ sheet, cell AO4) is −1.0497 %. This means that: 0.9336 × (1 + (−0.010497)) = 0.9238. By applying this past return to the current price of Position 3 of the treasury portfolio, we have: 1.0267 × (1 + (−0.010497)) = 1.01592273.
The investor’s position is ‘long’, so a fall in the EUR/USD rate will be unfavourable for him, and the loss (in USD) will be:

75 000 000 × (1.0267 × (1 + (−0.010497)) − 1.0267) = 75 000 000 × (1.01592273 − 1.0267) = −808 295.31

This result is displayed in cell F3 of the ‘Treasury Reval’ sheet.

8.2.1.4 Revaluation of the portfolio

The revaluation of the treasury portfolio is shown in the table produced by cells B2 to G102 on the ‘Treasury Reval’ sheet.
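The pip value and the historical revaluation of the spot position can be reproduced in a few lines (a sketch; the variable names are ours, the figures are those of Position 3):

```python
# Position 3: long EUR 75 000 000 against USD at 1.0267.
nominal_eur = 75_000_000
price = 1.0267

# Value of a 'pip' (0.0001 in the EUR/USD quotation).
pip_value = nominal_eur * (price + 0.0001) - nominal_eur * price   # 7 500 USD

# Historical return of spot EUR/USD, 19 -> 26 January 2001.
hr = (0.9238 - 0.9336) / 0.9336                                    # -1.0497 %

# Revaluation: apply the past relative variation to the current price.
loss_usd = nominal_eur * (price * (1 + hr) - price)                # -808 295.31 USD
```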


Asset and Risk Management

For each of the positions, from 1 to 3, we have applied 100 historical returns (from 26 January 2001 to 20 December 2002) in the currency in question (GBP, USD, EUR). The total shown is the loss (negative total) or profit (positive total) as calculated above, taking account of past returns. Let us take as an example the first revaluation (corresponding to the 26 January 2001 return) for Position 1 of the portfolio (cell D3 in the ‘Treasury Reval’ sheet). The formula that allows the loss or profit to be calculated consists of the difference between the interest receivable at the current (initial) price of the position and the interest receivable in view of the application to the initial price of the corresponding historical return on 26 January 2001. We therefore have the general formula:

L = C · R · (1 + HR) · ND/DIV − C · R · ND/DIV
  = C · (R · (1 + HR) · ND/DIV − R · ND/DIV)

Here: L is the loss; C is the total to which the transaction relates; R is the current rate (initial price) of the transaction; HR is the historical return. It is this last formula that is found in cells D3 to F102. Of course we could have simplified3 it here:

L = C · (R · (1 + HR) · ND/DIV − R · ND/DIV)
  = C · R · (ND/DIV) · ((1 + HR) − 1)
  = C · R · (ND/DIV) · HR

If the investor is ‘long’, he has borrowed and will wish to cover himself by re-placing his money at a higher rate than the initial price. Therefore, if HR is negative (positive), he has realised a loss (profit). This is the case for Position 1 of the portfolio on 26 January 2001. On the other hand, if the investor is ‘short’, he has lent and will wish to cover himself by borrowing the money at a lower rate than the initial price. Therefore, if HR is negative (positive), L must be positive (negative) and the preceding formula (valid if the investor is ‘long’) must be multiplied by −1. For Position 2 of the portfolio, the investor is ‘short’ and we have (cell E3 of the ‘Treasury Reval’ sheet):

L = (−1) · C · (R · (1 + HR) · ND/DIV − R · ND/DIV)

3 We have not simplified it, so that the various components of the difference can be seen more clearly.
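The general formula, together with the multiplication by −1 for a ‘short’ position, can be wrapped into a single function (a sketch; the function name and the long_position flag are ours):

```python
def reval(nominal, rate, hr, nd, div, long_position=True):
    """Loss/profit from applying a historical return HR to a money-market position.

    Long position:  L = C * (R*(1+HR) - R) * ND/DIV
    Short position: the same expression multiplied by -1.
    """
    sign = 1 if long_position else -1
    return sign * nominal * (rate * (1 + hr) - rate) * nd / div

hr = (0.0555 - 0.0572) / 0.0572   # -2.9720 %
long_pl = reval(50_000_000, 0.0394, hr, 90, 365)                        # a loss
short_pl = reval(50_000_000, 0.0394, hr, 90, 365, long_position=False)  # a profit
```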


On each past date, we have a loss or profit expressed in the currency of the operation for each position. As the investor has the euro for his national or accounting currency, we have summarised the three losses or gains in EUR equivalents at each date. The chosen FX rate for the euro against the other currencies is of course the rate prevailing on the date of calculation of the VaR, that is, 20 December 2002. The overall loss is shown in column G of the ‘Treasury Reval’ sheet.

8.2.1.5 Classifying the treasury portfolio values and determining the VaR

When all the revaluations have been carried out, we have (see ‘Treasury Reval’ sheet) a series of 100 losses or profits according to historical return date. One has to classify them in increasing order, that is, from the greatest loss to the smallest. The reader will find column G of the ‘Treasury Reval’ sheet classified in increasing order on the ‘Treasury VaR’ sheet, in column B. To the right of this column, 1 − q appears.

A. Numerical interpretation

We think it important to state once again that when 1 − q corresponds to a plateau of the loss distribution function, we have chosen to define VaR as the left extremity of the said section (see Figure 6.7). We therefore say that:

• There are 66 chances out of 100 that the actual loss will be −EUR360 822 or less (1 − q = 0.34), or VaR 0.66 = −360 822.
• There are 90 chances out of 100 that the actual loss will be −EUR1 213 431 or less (1 − q = 0.10), or VaR 0.90 = −1 213 431.
• There are 99 chances out of 100 that the actual loss will be −EUR2 798 022 or less (1 − q = 0.01), or VaR 0.99 = −2 798 022.

B. Representation in graphical form

If the forecast of losses is shown on the x-axis and 1 − q is shown on the y-axis, the estimated loss distribution will be obtained. Figure 8.1 also appears on the ‘Treasury VaR’ sheet.

Figure 8.1 Estimated loss distribution of treasury portfolio

8.2.2 Bond portfolio case

The first stage once again consists of determining the past returns (in this case, weekly).

8.2.2.1 Past variations to be applied

The main difficulty connected with this type of asset, in terms of determining VaR, is the question of whether or not historical prices or rates are available. When a bond is first issued, for example, it has to be acknowledged that we do not have any historical prices. As the aim of this chapter is merely to show how VaR can be calculated using the historical simulation method, using deliberately simplified examples, we have used a range of rates for deposits and swaps on the basis of which we will construct our example. We did not therefore wish to use historical bond prices as a basis.

A. Yield

The price of a bond is known on at least one date: that for which we propose to determine the VaR (in our example, 20 December 2002). Using this price, and by taking into account the calculation date, the maturity date, the coupon date, the price on maturity, the basis of calculation and the frequency of the coupon payments, the ‘yield to maturity’, or YTM, can be calculated as shown in Section 4.1.2. Cells H3 to H9 of the ‘Portfolios’ sheet show the respective yields for the bonds in our fictitious portfolio. As not all versions of Excel contain the ‘yield’ financial function, we have copied the values into cells I3 to I9. It is to this yield to maturity that we intend to apply the variations in the corresponding deposit and/or swap rates that match, in terms of maturity, the remaining period of the corresponding bond. We are of course aware that this method is open to criticism, as the price, if we had used it, not only reflects general interest-rate levels but also carries a dimension of credit risk and lack of liquidity.

B. Interpolation of rates

We cannot deduce from the ‘Rates’ sheet the returns to be applied to the yield to maturity; the remaining periods are in fact broken periods, which do not coincide with the standard maturities quoted.
We have determined (in the ‘Bonds Interp’ sheet) the two maturity dates (columns I and J in that sheet) that straddle the remaining period, together with the portion of the rate differential to be added to the lower rate (column F divided by column H). Readers will find, in the ‘Variation Bonds’ sheet, the values of the rates to be interpolated (taken from the ‘Rates’ sheet) and the rate differential to which the interpolation rule mentioned above is applied. For bond 1 in our portfolio, this calculation is found in column G. All that remains now is to determine the return relative to the series of synthetic rates in exactly the same way as shown in Section 8.2.1. The returns applicable to the yield to maturity for bond 1 in the portfolio are thus shown in column H of the ‘Variation Bonds’ sheet.

8.2.2.2 Composition of portfolio

The ‘bond’ portfolio is found on the ‘Portfolios’ sheet in CH8.XLS. This sheet, like the rest of the portfolio, is purely fictitious. The investor’s national currency is the euro.
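The interpolation rule described above (lower rate plus a portion of the rate differential) can be sketched as follows; the function name and the example figures are ours.

```python
def interp_rate(t, t_low, r_low, t_high, r_high):
    """Linearly interpolate a rate for a broken period t lying between
    two standard maturities t_low < t < t_high."""
    fraction = (t - t_low) / (t_high - t_low)   # portion of the rate differential
    return r_low + fraction * (r_high - r_low)

# e.g. a residual life of 4.5 years between the 4-year (3.0 %) and 5-year (3.4 %) rates
r = interp_rate(4.5, 4, 0.030, 5, 0.034)        # 3.2 %
```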


The portfolio is ‘long’, with six bonds, for which the following are given:

• currency;
• coupon;
• maturity date;
• ISIN code;
• last known price (this is the ‘bid’, because if the position is closed, one should expect to deal at the bid price);
• yield to maturity (in formula form in column H, in copied value in column I);
• basis of calculation (current/current or 30/360);
• frequency of payment of the coupon.

8.2.2.3 Portfolio revaluation

In the range B2–H9, the ‘Losses Bonds’ sheet summarises the portfolio data that we need in order to revalue it. Remember that we propose to apply the relative variations in rates (column L for bond 1, column Q for bond 2 etc.) to the yield to maturity (column C) of each bond that corresponds, in terms of maturity, to the period still outstanding. A new yield to maturity is therefore deduced (column M for bond 1); it is simply the current total to which a past variation has been applied. We explained above that, starting from the last known price of a bond, and taking account of the date of the calculation as well as the expiry date, the coupon date, the price on maturity, the basis of calculation and the frequency of the coupon, we deduce the yield to maturity. It is possible, conversely, to start from our ‘historical’ yields to maturity in order to reconstruct a synthesised price (column N). The ‘Price’ function in Excel returns a price on the basis of the given yield to maturity (column M) and of course that of the date of the calculation as well as the expiry date, coupon date, price on maturity, basis of calculation and frequency of coupon. As not all versions of Excel contain the ‘Price’ function, we have copied the values from column N into column O for bond 1, from column S into column T for bond 2, etc. All that now remains is to compare the new price to the last known price, and to multiply this differential by the nominal held in the portfolio in order to deduce the resulting profit or loss (column P for bond 1).
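The revaluation logic (shock the yield to maturity by a past relative variation, reprice, compare with the last known price) can be sketched with a plain bond-price function standing in for Excel's ‘Price’; annual coupons and an integer number of years to maturity are simplifying assumptions of ours, as are all names and figures.

```python
def bond_price(ytm, coupon_rate, years, redemption=100.0):
    """Price per 100 nominal of a bond paying an annual coupon, an
    integer number of years from redemption (a simplified stand-in
    for Excel's PRICE function)."""
    cashflows = [coupon_rate * 100] * years
    cashflows[-1] += redemption
    return sum(cf / (1 + ytm) ** (t + 1) for t, cf in enumerate(cashflows))

ytm, coupon, years, nominal = 0.05, 0.05, 7, 100_000_000
p0 = bond_price(ytm, coupon, years)              # par bond: price = 100
hr = -0.02                                       # a past relative variation of the yield
p1 = bond_price(ytm * (1 + hr), coupon, years)   # new synthetic price
pnl = nominal * (p1 - p0) / 100                  # the yield fell, so the price rose
```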
As indicated in cell B11, we assume that we are holding a nominal of EUR100 million on each of the six bond lines.

Note

It may initially seem surprising that the nominal used for bond 1 (expressed in PLN) is also EUR100 million. In fact, rather than expressing the nominal in PLN, calculating the loss or profit and dividing the total again by the same EUR/PLN rate (that is, 3.9908 at 20 December 2002), we have immediately expressed the loss for a nominal expressed in euros. It is then sufficient (column AP) to summarise the six losses and/or profits for each of the 100 dates on each line (thus respecting the correlation structure).

8.2.2.4 Classifying bond portfolio values and determining VaR

Once all the new valuations have been made, a series of 100 losses or profits (‘Losses Bonds’ sheet) will be shown according to historical return date. One has to classify them


in ascending order, that is, from the greatest loss to the smallest. Readers will find column AP in the ‘Losses Bonds’ sheet classified in ascending order on the ‘Bonds VaR’ sheet in column B. 1 − q is located to the right of that column.

A. Numerical interpretation

We say that:

• There are 66 chances out of 100 that the actual loss will be −EUR917 or less (1 − q = 0.34), or VaR 0.66 = −917.
• There are 90 chances out of 100 that the actual loss will be −EUR426 740 or less (1 − q = 0.10), or VaR 0.90 = −426 740.
• There are 99 chances out of 100 that the actual loss will be −EUR1 523 685 or less (1 − q = 0.01), or VaR 0.99 = −1 523 685.

B. Representation in graphical form

If the loss estimates are shown on the x-axis and 1 − q is shown on the y-axis, the estimated loss distribution will be obtained. Figure 8.2 also appears on the ‘Bonds VaR’ sheet.

Figure 8.2 Estimated loss distribution of bond portfolio

8.3 THE NORMALITY HYPOTHESIS We have stressed the hidden dangers of underestimating the risk where the hypothesis of normality is adopted. In fact, because of the leptokurtic nature of market observations, the normal law tails (VaR being interested speciﬁcally in extreme values) will report the observed historical frequencies poorly, as they will be too ﬂat. It is prudent, when using theoretical forecasts to simplify calculations, to overstate market risks; here, however, the opposite is the case. In order to explain the problem better we have compared the observed distribution for the bond portfolio in CH8.XLS with the normal theoretical distribution. The comparison is found on the ‘Calc N’ sheet (N = normal) and teaches us an interesting lesson with regard to the tails of these distributions.


We have used the estimated loss distribution of the bond portfolio (copied from the ‘Bonds VaR’ sheet). We have produced 26 categories (from −1 600 000 to −1 465 000, from −1 465 000 to −1 330 000 etc., up to 1 775 000 to 1 910 000) in which each of these 100 losses will be placed. For example, the loss of −1 523 685.01 (cell D4) will belong to the first class (from −1 600 000 to −1 465 000, column G). The table G2–AF103 on the ‘Calc N’ sheet contains one class per column (cells G2–AF3) and 100 lines, that is, one per loss (cells D4–D103). Where a given loss intersects with a class, there will be a figure of 0 (if the loss is not in the category in question) or 1 (if it is). By finding the total of 1s in a column, we will obtain the number of losses per class, or the frequency. Thus, a loss of between −1 600 000 and −1 465 000 has a frequency of 1 % (cell G104) and a loss of between 425 000 and 560 000 has a frequency of 13 % (cell V104). Cells AH2–AJ29 carry the category centres (−1 532 500 for the class −1 600 000 to −1 465 000), and the frequencies as a figure and a percentage. If we look at AH2 to AI29 in bar chart form, we will obtain the observed distribution for the bond portfolio (Figure 8.3), located in AL2 to AQ19. Now the normal distribution should be calculated. We have calculated the mean and standard deviation for the estimated distribution of the losses in D104 and D105, respectively. We have carried the losses to AS4 to AS103. Next, we have calculated the value of the normal density function (already set out in Section 3.4.2 ‘Continuous model’), that is,

f(x) = 1/(σ√(2π)) · exp(−(1/2) · ((x − µ)/σ)²),

for each loss in the bond portfolio (AT4 to AT103). If we plot this data on a graph, we will obtain (Figure 8.4) the graph located from AV2 to BB19. In order to compare these distributions (observed and theoretical), we have superimposed them; the calculations that allow this superimposition are located in the ‘Graph N’ sheet.
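The binning of the losses and the normal density used for the comparison can be sketched as follows; the class origin of −1 600 000 and the class width of 135 000 match the text, while the loss data and the standard deviation here are illustrative assumptions of ours.

```python
import math

def normal_pdf(x, mu, sigma):
    """f(x) = 1/(sigma*sqrt(2*pi)) * exp(-0.5*((x - mu)/sigma)**2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bin_frequencies(losses, start, width, n_classes):
    """Observed frequency (in %) of losses falling in each class."""
    counts = [0] * n_classes
    for x in losses:
        k = int((x - start) // width)
        if 0 <= k < n_classes:
            counts[k] += 1
    return [100 * c / len(losses) for c in counts]

# The density peaks at the mean; the tails are thinner than leptokurtic data.
peak = normal_pdf(0.0, 0.0, 250_000)
away = normal_pdf(250_000, 0.0, 250_000)

# Three illustrative losses, classes of width 135 000 starting at -1 600 000.
freqs = bin_frequencies([-1_523_685.01, 500_000, 450_000], -1_600_000, 135_000, 26)
```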
As can be seen in Figures 8.3 and 8.4, the coordinates are proportional (factor 135 000 for class intervals). We have summarised the following in a table (B2 to D31 of the ‘Graph N’ sheet):

Figure 8.3 Observed distribution (Bonds Pf.)


Figure 8.4 Normal distribution (Bonds Pf.)

Figure 8.5 Normal and observed distributions (Bonds Pf.)

• the class centres;
• the observed frequencies relating to them;
• the normal coordinates relative to each class centre.

It is therefore possible (Figure 8.5) to construct a graph, located in E2 to N32, which results from the superimposition of the two distributions. We may observe that the normal law underestimates the frequencies in the distribution tails, which further confirms the leptokurtic nature of the financial markets.

Part IV From Risk Management to Asset Management

Introduction
9 Portfolio Risk Management
10 Optimising the Global Portfolio via VaR
11 Institutional Management: APT Applied to Investment Funds


Introduction

Although risk management methods have been used first and foremost to quantify the market risks relative to market transactions, these techniques tend to be generalised, especially if one wishes to gain a comprehensive understanding of the risks inherent in the management of institutional portfolios (investment funds, hedge funds, pension funds) and private portfolios (private banking and other wealth management methods). In this convergence between asset management on the one hand and risk management on the other, towards what we term the discipline of ‘asset and risk management’, we are arriving, especially in the field of individual client portfolio management, at ‘portfolio risk management’, which is the subject of Chapter 9. Next, we will look at methods for optimising portfolios of assets that verify the normal law hypotheses, which is especially the case with equities.1 In particular, we will be adapting two known portfolio optimisation methods:

• Sharpe’s simple index method (see Section 3.2.4) and the EGP method (see Section 3.2.6);
• VaR (see Chapter 6); we will be seeing the extent to which VaR improves the optimisation.

To close this fourth part, we will see how the APT model described in Section 3.3.2 allows investment funds to be analysed in behavioural terms.

Figure P1 Asset and risk management. The diagram links asset management (fund management and portfolio management: asset allocation and market timing, stock picking, currency allocation) and risk management (stop loss, credit equivalent, VBP, VaR, MRO) through portfolio risk management and fund risk management, which together form asset and risk management.

1 In fact, the statistical distribution of an equity is leptokurtic but becomes normal over a sufficiently long period.

9 Portfolio Risk Management1

9.1 GENERAL PRINCIPLES

This involves application of the following:

• To portfolios managed traditionally, that is, using:
— asset allocation with a greater or lesser risk profile (including, implicitly, market timing);
— a choice of specific securities within the category of equities or options (stock picking);
— currency allocation.
• To particularly high-risk portfolios (said to have a ‘high leverage effect’) falling clearly outside the scope of traditional management (the most frequent case), a fivefold risk management method that allows:
— daily monitoring by the client (and intraday monitoring if market conditions require) of the market risks to which he or she is exposed given the composition of his or her portfolio;
— monitoring of equal regularity by the banker (or wealth manager where applicable) of the client positions for which he or she is by nature the only person responsible.

Paradoxically (at least initially), it is this second point that is essential for the client, since this ability to monitor credit risk with the use of modern and online tools allows the banker to minimise the client’s need to provide collateral, something that earns little or nothing.

9.2 PORTFOLIO RISK MANAGEMENT METHOD

Let us take the case of the particularly high-risk portfolios, including derivatives:

• linear portfolios (such as FRA, IRS, currency swaps and other forward FX);
• nonlinear portfolios (options);

that is, highly leveraged portfolios. In order to minimise the need for collateral under this type of portfolio wherever possible, the pledging agreement may include clauses that provide for a risk-monitoring framework, which entails rights and obligations on the part of the contractual parties:

• The banker (wealth manager) reports on the market risks (interest rates, FX, prices etc.), thus helping the client to manage the portfolio.1

1 Lopez T., Delimiting portfolio risk, Banque Magazine, No. 605, July–August 1999, pp. 44–6.


• The client undertakes to respect the risk criteria (by complying with the limits) set out in the clauses, authorising the bank (under certain conditions) to act in his name and on his behalf if the limits in question are breached.

A portfolio risk management mandate generally consists of two parts:

• the investment strategy;
• the risk framework.

9.2.1 Investment strategy

This part sets out:

• the portfolio management strategy;
• the responsibilities of each of the parties;
• the maximum maturity dates of the transactions;
• the nature of the transactions.

9.2.2 Risk framework

In order to determine the risks and limits associated with the portfolio, the following four limits will be taken into consideration, each of which may not be exceeded.

1. The stop loss limit for the portfolio.
2. The maximum credit equivalent limit.
3. The upper VBP (value of one basis point) limit for the portfolio.
4. The upper VaR (Value at Risk) limit for the portfolio.

For each measure, one should be in a position to calculate:

• the limit;
• the outstanding to be compared to the limit.

9.2.2.1 The portfolio stop loss

With regard to the limit, the potential global loss on the portfolio (defined below) can never exceed x % of the cash equivalent of the portfolio, the portfolio being defined as the sum of:

• the available cash balances, on one hand;
• the realisation value of the assets included in the portfolio, on the other hand.

The percentage of the cash equivalent of the portfolio, termed the stop loss, is determined jointly by the bank and the client, depending on the client’s degree of aversion to the risk, based in turn on the degree of leverage within the portfolio. For the outstanding, the total potential loss on the portfolio is the sum of the differences between:


• the value of its constituent assets at the initiation of each transaction;
• the value of those same assets on the valuation date.

Each of these differences is taken into account only if it is less than zero.

Example

Imagine a portfolio of EUR100 invested in five equities ABC at EUR10 per share and ten equities XYZ at EUR5 per share at 1 January. If the value of ABC changes to EUR11 and that of XYZ to EUR4 on the next day, the potential decrease in value on XYZ (loss of EUR1 on 10 equities in XYZ) will be taken into account for determining the potential overall loss on the portfolio. The EUR5 increase in value on the ABC equities (gain of EUR1 on five equities ABC) will, however, be excluded. The overall loss will therefore be −EUR10. The cash equivalent of the portfolio will total EUR95, that is, the total arising from the sale of all the assets in the portfolio. This produces a stop loss equal to 20 % of the portfolio cash equivalent (20 % of EUR95, or EUR19). See Table 9.1.

Table 9.1 Stop loss

Stop loss    Potential loss    Use of limit
EUR19        −EUR10            52.63 %

9.2.2.2 Maximum credit equivalent limit

The credit limit totals the cash equivalent of the portfolio (defined in the ‘portfolio stop loss’ section). The credit liabilities, which consist of the sum of the credit equivalents defined below, must be equal to or less than the cash equivalent of the portfolio. The credit equivalent calculation consists of applying an equivalent-value weighting to base products or their derivatives; these may or may not be linear. The weighting will be a function of the intrinsic risk relative to each product (Figure 9.1) and will therefore depend on whether or not the product:

• involves an exchange of principal (for example, a spot FX deal involves an exchange of principal whereas a forward FX deal will defer this to a later date);
• involves a contingent obligation (if options are issued);
• involves a contingent right (if options are purchased);

Figure 9.1 Weight of the credit equivalent (credit risk decreasing from spot, through option issues, option purchases and forward FX, down to FRA, IRS and currency swaps)


• the product price (if no exchange of principal is involved) is linked to one variable (interest rate for FRA, IRS and currency swaps) or two variables (interest rates and spot in the case of forward FX).

We could for example determine credit usage per product as follows:

1. For spot cash payments, 100 % of the nominal of the principal currency.
2. For the sale of options, the notional for the underlying principal currency, multiplied by the forward delta.
3. For the purchase of options, 100 % of the premium paid.
4. For other products, each position opened in the portfolio would be the subject of a daily economic revaluation (mark-to-market). The total potential loss arising would be taken (gains being excluded) and multiplied by a weighting factor (taking account of the volatility of the asset value) equal to 100 % + x % + y % for forward FX and 100 % + x % for FRA, IRS and currency swaps, x and y always being strictly positive amounts.

Example

Here is a portfolio consisting of five assets (Tables 9.2 and 9.3). The revaluation prices are shown in Table 9.4.

Table 9.2 FX products

Product            P/S   Currency   Nom.    P/S   Currency   Nom.           Spot   Forward
Spot               S     EUR        5 m     P     USD        5.5 million    1.1    –
Six-month future   P     USD        10 m    S     JPY        1170 million   120    117

Table 9.3 FX derivatives and FRA

Product                        P/S   Currency   Nominal         Price/premium
Three-month call Strike 1.1    P     EUR/USD    EUR11 million   EUR220 000
Two-month put Strike 195.5     S     GBP/JPY    £5 million      GBP122 000
FRA 3–6                        S     DKK        100 million     3.3 %

Table 9.4 Revaluation price

Product      Historical price   Current price     Loss (currency)   Potential loss (EUR)
Spot         1.1                1.12              −100 000          −89 285.71
FX forward   117                114.5             −25 million       −189 969.60
Long call    2.00 % nom. EUR    2.10 % nom. EUR   +11 000           +11 000
Short put    2.44 % nom. GBP    2.48 % nom. GBP   −2000             −3034.90
FRA          3.3 %              3.4 %             −25 000           −3363.38
Total                                                               −274 653.59
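The ‘Loss (currency)’ column of Table 9.4 can be reproduced directly from the position data of Tables 9.2 and 9.3; a sketch in which the variable names are ours and the 90/360 accrual assumed for the three-month FRA period is our assumption.

```python
# Mark-to-market loss/profit in the currency of each position (Table 9.4).
spot_usd = 5_000_000 * (1.10 - 1.12)             # sold EUR at 1.10, now 1.12: -100 000 USD
forward_jpy = 10_000_000 * (114.5 - 117.0)       # bought USD forward at 117:  -25 000 000 JPY
call_eur = 11_000_000 * (0.0210 - 0.0200)        # long call, premium 2.00 % -> 2.10 %: +11 000 EUR
put_gbp = -5_000_000 * (0.0248 - 0.0244)         # short put, premium rose:    -2 000 GBP
fra_dkk = -100_000_000 * (0.034 - 0.033) * 90 / 360   # sold FRA 3-6, rates rose: -25 000 DKK
```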


Table 9.5 Credit equivalent agreements

Product    Credit equivalent
Spot       100 % of nominal of principal currency
…          110 % of potential loss

… (χ2) step by step. The exclusion of variables is conditioned in all cases by the degree of adjustment of the model. The rate of concordance between the model and the observed reality must be maximised. The SAS output will be the association of predicted probabilities and observed responses – concordant: 97.9 %. In the following example (Table 12.8), the variable Mc10y has a probability of 76.59 % of being statistically zero. Excluding it would, however, lead to a deterioration in the rate of concordance between the observations (repricing–non-repricing) and the forecasts of the model (repricing–non-repricing). This variable must remain in the model. There are other criteria for measuring the performance of a logistic regression, such as the logarithm of likelihood. The closer the log-likelihood is to zero, the better the adjustment of the model to the observed reality (−2log L in the SAS output). The log-likelihood can also be approximated by the McFadden R2: R2 = 1 − (−2log L (intercept and covariates) / −2log L (intercept only)).
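The McFadden R2 just mentioned, computed from the two −2 log L values reported by SAS, is simply the following (a sketch; the input values here are illustrative, not the book's):

```python
def mcfadden_r2(neg2ll_model, neg2ll_intercept_only):
    """McFadden pseudo-R2: 1 - (-2logL of the full model) / (-2logL intercept only).

    The closer the model's -2logL is to zero, the closer R2 is to 1.
    """
    return 1 - neg2ll_model / neg2ll_intercept_only

r2 = mcfadden_r2(40.0, 160.0)   # illustrative values
```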


Table 12.8 Logistic regression

Variables   DF   Parameter estimate   Standard error   Wald chi-square   Proba over chi-square   Odds ratio12
Constant    1    35.468               12.1283          8.5522            0.0035
Time        1    −0.2669              0.2500           1.1404            0.2856                  0.766
M3m         1    0.3231               0.1549           4.3512            0.0370                  1.381
Ma3m        1    −5.9101              3.4407           2.9504            0.0859                  0.003
Ma10y       1    0.9997               0.7190           1.9333            0.1644                  2.718
Mc3m        1    0.0335               0.0709           0.2236            0.6363                  1.034
Mac3m       1    −0.0731              0.0447           2.6772            0.1018                  0.929
Mac5y       1    0.1029               0.1041           0.9766            0.323                   1.108
Mc10y       1    0.0227               0.0762           0.0887            0.7659                  1.023
Mac10y      1    −0.1146              0.102            1.2618            0.2613                  0.892

Association of predicted probabilities and observed responses: Concordant = 97.9 %, Discordant = 2.1 %, Tied = 0 % (1692 pairs)

In the model, the probability of a change in rate increases with:

• time;
• the fall in the static spread A at 3 months;
• the rise in the static spread B at 3 months;
• the fall in the static spread A at 10 years;
• the slowing of the rise in the dynamic spreads A 3 months, B 5 years and A 10 years;
• the rise in the dynamic margins B 3 months and B 10 years.

Displaying the model

For each model, the linear combination on the historical data must be programmed. This will allow the critical value of the model, needed for dissociating the repricing periods from the periods of equilibrium, to be determined. As the dissociation is not 100 %, there is no objective value. The critical value chosen conditions the statistical errors of the first and second kind. In the example, the value 1.11 allows almost all the repricings to be obtained without the model anticipating the actual repricing periods too much (see model CD-ROM and critical value). The method presented was applied to all the floating-rate products of a bank every two months for nine years maximum in the period 1991 to 1999, depending on the historical data available and the creation date of the products. The results are encouraging as the rates of concordance between the models and the observed reality, with just a few exceptions, are all over 90 %. The classic method, based on the choice of dynamic and static spreads through simple statistical correlation, has also been tested. This method shows results very far removed from those obtained using the method proposed, as its rate of concordance of pairs was less than 80 %.

12 The odds ratio is equal to the exponential of the estimated parameter: e^b. A variation of one unit in the variable (here time and the spreads) multiplies the odds of ‘repricing’ by e^b.
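The odds ratios of Table 12.8 are the exponentials of the estimated coefficients, and the repricing probability follows from the logistic link; a sketch, with a few coefficients taken from the table and a function name of our own:

```python
import math

coefficients = {"Time": -0.2669, "M3m": 0.3231, "Ma3m": -5.9101, "Mc10y": 0.0227}

# Odds ratio of each covariate: exp(b).
odds_ratios = {name: math.exp(b) for name, b in coefficients.items()}

def repricing_probability(intercept, coefs, values):
    """Logistic model: p = 1 / (1 + exp(-(a + sum(b_i * x_i))))."""
    z = intercept + sum(coefs[k] * values[k] for k in coefs)
    return 1 / (1 + math.exp(-z))

# With all covariates at zero, p is driven by the intercept alone.
p_mid = repricing_probability(0.0, coefficients, {k: 0.0 for k in coefficients})
```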

Techniques for Measuring Structural Risks in Balance Sheets


12.4.3.3 Use of the models in rate risk management

This behavioural study allows the arbitrary rate-change conventions to be replaced to good advantage. Remember that the conventions in the interest-rate gaps often take the form of a simple calculation of an average for the periods during which rates are not changed. Working on the hypothesis that the bank’s behaviour is stable, we can use each model prospectively by calculating the static and dynamic spreads on the basis of the sliding forward rates, for example over one year. This floating-rate integration method gives us two cases:

• The rate change occurs between today’s date and one year from now. In this case, the contract revision date will be precisely on that date.
• The rate change is not probable over a one-year horizon. In this case, the date of revision may be put back to the most distant prospective date (in our example, in one year).

Naturally, using an interest-rate gap assumes in the first instance that the rate-change dates are known for each contract, but also that the magnitude of the change can be anticipated in order to assess the change in the interest margin. Our method satisfies the first condition but does not directly give us the magnitude of the change. In fact, between two repricing periods we see a large number of situations of equilibrium. In practice, the ALM manager can put this free space to good use to optimise the magnitude of the change and profit from a long or short balance-sheet position. This optimisation process is made easier by the model. In fact, a change with too low a magnitude will necessitate a further change, while a change with too high a magnitude may be incompatible with the historical values of the model (see the statistics for magnitude of changes). Modelling the repricing improves knowledge of the rate risk and optimises the simulations on the interest margin forecasts and the knowledge of the market risk through VaR.
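The two cases above reduce to a small rule: given the model score computed on the sliding forward rates at each future date, the revision date is the first date at which the score crosses the critical value, otherwise the prospective horizon itself. The critical value 1.11 is the one quoted in the text; the score series and the function name are illustrative assumptions of ours.

```python
def revision_date(scores, horizon, critical=1.11):
    """scores: list of (date_index, model score) over the forward horizon.
    Returns the first date at which a repricing is predicted, else the
    most distant prospective date (the horizon)."""
    for date, score in scores:
        if score >= critical:
            return date
    return horizon

# Illustrative: the score crosses 1.11 at month 7 of a 12-month horizon.
scores = [(m, 0.17 * m) for m in range(1, 13)]
month = revision_date(scores, horizon=12)        # 7

# No crossing: the revision date is pushed back to the horizon.
no_change = revision_date([(1, 0.5), (2, 0.8)], horizon=12)   # 12
```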
12.4.3.4 Remarks and criticisms

Our behavioural approach does, however, have a few weak points. The model specifies the revision dates without indicating the size of the change in basis points; it is not a margin optimisation model. Another criticism relates to the homogeneity of the period studied: a major change in one or more of the parameters set out previously could disrupt or invalidate the estimated model. Finally, this empirical method cannot be applied to new floating-rate products. Despite these limitations, the behavioural approach to static and dynamic spreads, based on the analysis of canonical correlations, gives good results and is sufficiently flexible to explain changes in rates on very different products. Our bank's balance sheet contains both liability and asset products, each with its own specific client segmentation. The behavioural method allows complex parameters to be integrated, such as the business policy of banks, the sensitivity of volume adjustments to market interest rates, and the competitive environment.

12.5 REPLICATING PORTFOLIOS

In asset and liability management, a measurement of the monthly VaR for the assets as a whole is information of first importance on the market risk (rate and exchange). It is a measurement that allows the economic forecasts associated with the risk to be assessed.


ALM software packages most frequently use J. P. Morgan's interest-rate and exchange-rate variance–covariance matrix, as the duration information necessary for the calculation is already available. It is well known that products without a maturity date are a real stumbling block for this type of VaR and for ALM, and there is relatively little academic work on attributing maturity dates to demand credit or debit products. The aim of 'replicating portfolios' is to attribute a maturity date to balance-sheet products that do not have one. These portfolios combine all the statistical or conventional techniques that allow the position of a product without a maturity date to be converted into an interwoven whole of contracts that are homogeneous in terms of liquidity and duration. Replicating portfolios can be constructed in different ways. If the technical environment allows, they can be built contract by contract, defining development profiles and therefore implicit maturity dates for 'stable' contracts. Where necessary, on the basis of volumes per type of product, the optimal value method may be used. Other banks rely on overly arbitrary definitions of replicating portfolios.

12.5.1 Presentation of replicating portfolios

Many products have no certain maturity date, including, among others, the following cases:

• American options, which can be exercised at any time, outside the scope of the balance sheet.
• Demand advances and overdrafts on the asset side.
• Current liability accounts.

The banks construct replicating portfolios in order to deal with this problem. This kind of portfolio uses statistical techniques or conventions. The assigned aim of all the methods is to transform an accounting balance of demand products into a number of contracts with differing characteristics (maturity, origin, depreciation profile, internal transfer rate etc.).
At the time of the analysis, the accounting balance of the whole contract portfolio is equal to the accounting balance of the demand product. Figures 12.2–12.4 offer a better understanding of replicating portfolio construction. The replicating portfolio presented consists of three different contracts that explain the accounting balances at t−1, t0 and t1. The aim of the replicating portfolio is to represent the structure of the flows that make up the accounting balance.

Figure 12.2 Accounting balances on current accounts (60 at t−1, 90 at t0 and 80 at t1, in thousands of millions)

Figure 12.3 Contracts making up the replicating portfolio

Figure 12.4 Replicating portfolio constructed on the basis of the three contracts (60 at t−1, 90 at t0 and 80 at t1)
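The construction principle behind Figures 12.2–12.4 can be checked mechanically: at every observation date, the outstanding amounts of the replicating contracts must sum to the accounting balance of the demand product. The accounting balances below are those of Figure 12.4; the individual contract profiles are hypothetical.

```python
# Sketch of the replicating-portfolio identity: contract outstandings sum to
# the accounting balance at each date. Contract profiles are hypothetical.

dates = ["t-1", "t0", "t1"]

# Outstanding amount of each replicating contract at each date.
contracts = {
    "contract 1": [40, 40, 40],   # long, flat amortisation profile
    "contract 2": [20, 30, 30],   # created earlier, topped up at t0
    "contract 3": [0, 20, 10],    # created at t0, partly run off at t1
}

accounting_balance = [60, 90, 80]   # demand product balance (Figure 12.4)

replicated = [sum(c[i] for c in contracts.values()) for i in range(len(dates))]
assert replicated == accounting_balance
```

The identity is what makes the decomposition legitimate: the replicating portfolio is a re-expression of the same balance, not a new position.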

12.5.2 Replicating portfolios constructed according to convention

To present the various methods, we take the example of current accounts. There are two types of convention for constructing a replicating portfolio. The first type can be described as simplistic; such conventions are used especially for demand deposits with an apparently stable monthly balance. On the basis of this observation, some banks construct the replicating portfolio by applying linear depreciation to the accounting balance at moment t over several months. As the depreciation is linear over several months or even several years, the banking institutions consider that the structure of the flows making up the accounting balance is stable overall in the short term. In a replicating portfolio constructed over 12 months, only 1/12 of the balance is depreciated at the end of one month (2/12 by the end of the second month, etc.). This arbitrary technique, which has no statistical basis, is unsatisfactory, as many current accounts are partially or totally depreciated over one month because income is paid in monthly.

The second class covers conventions considered more sophisticated, which do call in part on statistical studies; because of the very restrictive hypotheses retained, however, construction of the replicating portfolio remains a matter of convention. For example, two well-known statistical indicators of volatility are calculated: the arithmetical mean and the monthly standard deviation of the daily balances of all the deposits. The operation is repeated every two months, every quarter and so on, in order to obtain the statistical volatility indicators (mean, standard deviation) over a temporal horizon that increases from month to month. The interest, of course, lies in making the calculation over several years in order to refine the support that stable resources give to long-term functions such as credit facilities.
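The depreciation probability that this second class of conventions derives from the mean and standard deviation (developed in the following paragraphs) can be sketched with the standard library; the sample balance history is hypothetical.

```python
# Sketch of the probabilistic convention: the share of deposits treated as
# unstable over a period is Pr[X < 0], taking X as normally distributed with
# the mean and standard deviation of the balances observed over the period.
# The sample history below is hypothetical.
from statistics import NormalDist, mean, stdev

def unstable_share(balances):
    mu, sigma = mean(balances), stdev(balances)
    return NormalDist(mu, sigma).cdf(0.0)   # Pr[X < 0]

# A deposit averaging 100 with a standard deviation of 50: the probability of
# slipping into debit, and hence the unstable share, is about 2.3%.
share = unstable_share([50.0, 100.0, 150.0])
```

Repeating the calculation over two months, a quarter and so on gives the term structure of stable and unstable portions that the convention needs.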
Thanks to these indicators it is possible, using probability theory, to calculate the monthly portion of the deposits that will be depreciated month by month. For example:


to define the unstable portion of deposits for one month, we first calculate the probability that the current account will show a debit balance, given the monthly average and the standard deviation of the totals observed over the month. The probability obtained is equal to the unstable proportion for one month. In the general case, the percentage of deposits depreciated equals Pr[X < 0], where X is taken to be normally distributed with σ the standard deviation over a period (one month, two months etc.) of the daily totals of deposits and µ the arithmetical mean of the deposits over the same period. With this method, part of the deposits is depreciated or deducted each month until depreciation is complete; in other words, the balance sheet is deflated. For example, if the demand deposit entry in the balance sheet represents EUR 10 000 million, this sum will be broken down into monthly due dates that generally cover several years. Naturally, this convention for constructing a replicating portfolio is more satisfying than a simple arbitrary convention. Some serious weaknesses have, however, been noted. If we have a product with a credit balance, the proportion depreciated during the first month is the probability of the balance becoming a debit balance, in view of the monthly arithmetical mean and standard deviation calculated and observed. Under this approach, instability amounts to the probability of having a debit balance (for a liability product) or a credit balance (for an asset product). Credit positions capable of being debited to a considerable extent are thus considered probably stable! This shows the limits of an approach built on the global balance, that is, on the practice of aggregating the total accounting position day by day.

12.5.3 The contract-by-contract replicating portfolio

The other methods consist of producing more accurate projections for demand products on the basis of statistical analyses.
The first prerequisite for a statistical analysis to be consistent is to identify correctly each component that explains the overall development. In other words, the statistical analysis builds up the replicating portfolio account by account and day by day. The portfolio is not built on the daily accounting balance, which merges the behaviour of all the accounts. The banks allocate one account per type of product and per client. The account-by-account analysis is more refined, as it allows the behaviour of the flows to be identified per type of client. The account-by-account daily analysis raises technical problems of database constitution, including in the large-system or 'mainframe' environment, because of the volume created by the large number of current accounts and the need for historical entries. After the completion of this first stage, considerable thought was applied to defining the concept of stability in theoretical terms. To carry out this work, we used two concepts:

• The first was the method of the account-by-account replicating portfolio. We considered that the balance observed at moment t is the product of a whole set of interwoven accounts with different profiles and cashflow behaviour and nonsimultaneous creation dates.
• The second concept is the stability test, adopted for defining a stable account statistically. The test used is the standardised range or SR. This is a practical test used to judge the normality of a statistical distribution, as it is easy to interpret and calculate. SR is a measurement of the extent of the extreme values in the observations for a sample

per unit of dispersion (the standard deviation13). It is expressed as follows:

SR = [max(Xi) − min(Xi)] / σX
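The SR statistic is straightforward to implement; the balance histories below are hypothetical illustrations of a stable and an unstable account.

```python
# Minimal implementation of the standardised range (SR) statistic: the spread
# of the extreme observations, expressed in units of the sample standard
# deviation. Sample balance histories are hypothetical.
from statistics import stdev

def standardised_range(xs):
    return (max(xs) - min(xs)) / stdev(xs)

# A tightly clustered (stable) balance history gives a low SR...
sr_stable = standardised_range([100, 101, 99, 100, 100, 101, 99])
# ...while a history with occasional large swings gives a higher one.
sr_volatile = standardised_range([100, 101, 99, 100, 100, 5, 250])
assert sr_stable < sr_volatile
```

In practice the statistic is then compared against confidence bounds, wide or narrow as discussed below, to classify the account.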

This test allows three types of statistical distribution to be identified: a normal or Gaussian distribution, a flat distribution with a statistical dispersion higher than that of a normal law, and a distribution with a statistical dispersion lower than that of a normal law. A demand current account can be considered stable under the third typology: the difference between the extreme values, max(Xi) − min(Xi), is low relative to the standard deviation. The SR statistical test can be carried out with several confidence intervals, and the test can be programmed with differentiated confidence intervals. It is preferable to use a wide confidence interval for judging the daily stability of the account, in order to avoid the distortion caused by monthly income payments. In addition, the second condition for daily account stability is the absence of debit balances in the monthly historical values. Over a monthly history, it is preferable to take a wider confidence interval when the history of the deposits shows at least one debit balance, and a narrower interval otherwise. After the stable accounts have been identified, we can reasonably create repayment schedules by extending the historical trends. On the statistically stable accounts, two major trend types exist. With an upward trend, the deposits are stable over the long term and the total observed at moment t will therefore be depreciated in one amount over a long period, which may be the horizon of the historical database. With a downward trend, prolonging the trend gives the future date of complete depreciation of the account; the balance of the account at moment t is then depreciated linearly until the maturity date obtained by prolonging the trend. In order to provide an explanation, we have synthesised the conditions of stability in Table 12.9. We have identified four cases.
'SR max' corresponds to a wide confidence interval, while 'SR min' corresponds to a narrower confidence interval.

Table 12.9 Stability typologies on current account deposits

Case | Daily stability | Monthly stability | Historical monthly balances | Type of trend | Maturity date
1 | Yes (SR max) | Yes (SR min) | Always in credit | Upward & horizontal | Duration of history of data
2 | Yes (SR max) | Yes (SR min) | Always in credit | Downward | Duration of trend prolongation
3 | Yes (SR max) | Yes (SR max) | At least one debit balance | Generally upward | Duration of history of data
4 | Yes (SR max) | No (SR min) | Always in credit | No trend | Duration of history of data (for historical min. total)
13 There are of course other statistical tests for measuring the normality of a statistical distribution, such as the χ2 test, the Kolmogorov–Smirnov test for samples with over 2000 contracts, and the Shapiro–Wilk test where needed.


The fourth case requires further explanation. These accounts are always in credit balance over the daily and monthly histories, but are not stable on a monthly basis. On the other hand, there is a historical minimum credit balance that can be considered stable; economists refer to this as 'liquidity preference'. In this case, the minimum historical total is placed in the long-term repayment schedule (at the horizon of the database). The unstable contracts, or the unstable part of a contract, are given a short-term maturity date (one day to one month). This method allows better integration of products without maturity dates into the liquidity management and rate risk tools. Based on the SR test and the account-by-account replicating portfolio, it is simple in design and easy to carry out technically. Specifically, an accounting position of 120 will be broken down as follows: the unstable part will have a maturity date of one day or one month, and the stable part will be broken down out to the date of the historical period. If the history, and therefore the synthetic maturity dates, are judged insufficient, especially on savings products without maturity dates, it is possible under certain hypotheses to extrapolate the stability level and define a longer maturity for smaller totals. Suppose the historical period is 12 months and a volume of 100 out of the 130 observed is defined as stable; the maturity obtained directly is therefore one year. It is also known that the volatility of a financial variable calculated over one year can be used to extrapolate the volatility over two years by multiplying the standard deviation by the square root of time: σ2 years = σ1 year · √2. By the same token, the stable part can be considered to diminish in proportion to the square root of time. The stable part at five years can thus be defined: 100 · 1/√5 = 100 · 0.447 = 44.7. We therefore have 30 at one day, 55.27 at one year and 44.73 at five years.
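The square-root-of-time extrapolation above is easy to reproduce; the figures are those of the example in the text (130 observed, 100 stable over a 12-month history).

```python
# Sketch of the square-root-of-time extrapolation from the text: with 100 of
# 130 found stable over a 12-month history, the stable part at horizon t
# years is scaled by 1/sqrt(t); the remainder stays in the one-year bucket.
from math import sqrt

volume = 130.0
stable_1y = 100.0
unstable = volume - stable_1y            # 30, at one day / one month

stable_5y = stable_1y / sqrt(5)          # ~44.72 at five years
stable_1y_bucket = stable_1y - stable_5y # ~55.28 remaining at one year
```

The three buckets (one day, one year, five years) still sum to the observed volume of 130, matching the 30 / 55.27 / 44.73 split quoted in the text up to rounding.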
The stability obtained on the basis of a monthly and daily history therefore takes overall account of the explanatory variables of instability (arbitrage behaviour, monthly payment of income, liquidity preference, anticipation of rates, seasonality etc.). In this method, the interest rate is an exogenous variable. The link between changes in stability and interest rates therefore depends on the frequency of the stability analysis. It allows speciﬁc implicit maturity dates to be found while remaining a powerful tool for allocating resources on a product without a maturity date located among the assets. For a liability bank, a good knowledge of ﬂows will allow resources to be replaced over the long term instead of the interbank system and therefore provide an additional margin if the rate curve is positive. For an asset bank, this procedure will allow better management of the liquidity risk and the rate risk. Contrarily, this historical and behavioural approach to the replicating portfolios poses problems when rate simulations are carried out in ALM. In the absence of an endogenous rate variable, knowledge of the link between rate and replicating portfolio will be limited to history. This last point justiﬁes the replicating portfolio searches that include interest rates in the modelling process. 12.5.4 Replicating portfolios with the optimal value method 12.5.4.1 Presentation of the method This method was developed by Smithson14 in 1990 according to the ‘building approach’ or ‘Lego approach’. The method proposes a deﬁnition of optimal replicating portfolios 14 Smithson C., A Lego approach to ﬁnancial engineering. In The Handbook of Currency and Interest Rate Risk Management, edited by R. Schwarz and C. W. Smith Jr., New York Institute of Finance, 1990 or Damel P., “L’apport de replicating portfolio ou portefeuille r´epliqu´e en ALM: m´ethode contrat par contrat ou par la valeur optimale”, Banque et March´es, mars avril, 2001.

Techniques for Measuring Structural Risks in Balance Sheets

317

by integrating market interest rates and the anticipated repayment risk, and considers the interest rate(s) to be endogenous variables. This perspective is much more limited than the previous one when the bank carries out stochastic or other rate simulations on the ALM indicators (VaR, NPV for equity funds, interest margins etc.). In this method, it is considered that the stable part of a product without a maturity date is a function of simple rate contracts with known maturity dates. In this problem, the deﬁnition of stability is not provided contract by contract but on the basis of daily or monthly accounting volumes. An equation allows optimal representation of the chronological series of the accounting positions. This ﬁrst point deﬁnes a stable part and a volatile part that is the statistical residue of the stability equation. The volatile part is represented by a short-term bond with a short-term monetary reference rate (such as one month). The stable part consists of a number of interwoven zero-coupon bonds with reference rates and maturity dates from 3 months to 15 years. The weave deﬁnes a reﬁnancing strategy based on the monetary market and the primary bond market. The stable part consists of rate products. The advantage of this approach is therefore that the early repayment rate is taken into account together with any ‘repricing’ of the product and the volume is therefore linked to the reference interest rates. The model contains two principal equations. • Volum t represents the accounting position at moment t. • Stab t represents the stable part of the volume at moment t. • rrt is the rate for the product at moment t and taux1m, taux2m etc. represent the market reference rates for maturity rates 1 month, 2 months etc. • εt represents the statistical residual or volatile part of the accounting positions. 
• br i,t represents the interest on a zero-coupon bond position with maturity date i and market reference rate i at time t.
• αi represents the stable part replicated by the br i,t position; the αi sum to 1 (i = 3 months to 15 years).
• mr t represents the portion of the demand product rate that is not a function of the market rate. mr t is also equal to the difference between the weighted average rate obtained from the interwoven bonds and the floating or fixed retail rate. This term also includes the repricing strategy and the spread, which will be negative on liability products and positive on asset products.

Wilson15 was the first to use this approach specifically for optimal value. His equations can be presented as follows:

Volum t = Stab t + εt                                                      (a)

Volum t · rr t = εt · r 1 month,t + Σ (i = 3 months to 15 years) αi · br i,t + mr t + δt      (b)

with the constraint Σ (i = 3 months to 15 years) αi = 1.

15 Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.


Example of a replicated zero-coupon position: br 6m is a bond with a six-month maturity date and a six-month market reference rate. The stable part at t1 is considered to be invested in a six-month bond at the six-month market rate. At t2, t3, t4, t5 and t6 the new deposits (the difference between Stab t−1 and Stab t) are also placed in six-month bonds at the six-month reference market rate prevailing at each date. At t7 the stable part invested at t1 has matured; this stable part and the new deposits are replaced at six months at the six-month market rate prevailing at t7. The br i,t positions work in the same way for all the reference rates from three months to 15 years. After econometric adjustment of this two-equation model, the αi readily give us the duration of this demand product, using the additivity property of duration: if α1y = 0.5 and α2y = 0.5, the duration of this product without a maturity date will be 18 months.

12.5.4.2 Econometric adjustment of the equations

A. The stability or definition equation

There are many different forecasting models for chronological series. For rising accounting volumes, the equation will differ from that obtained for decreasing or oscillating accounting values; the equation to be adopted is the one that minimises the error term ε. Here follows a list (not comprehensive) of the various techniques for forecasting a chronological series:

• regression;
• trend extrapolation;
• exponential smoothing;
• autoregressive moving average (ARMA).

Wilson uses exponential smoothing: the stability of the volumes is an exponential function of time,

Stab t = b0 · e^(b1·t) + εt   or   log Stab t = log b0 + b1 · t + δt

Instead of this arbitrary formula, we propose to define the volumes on the basis of classical methods or recent research into random walk models specialised in time-series analysis. These models are much better adapted to estimating temporal series. The ARMA model is a classical model; it considers that the volumes observed are produced by a stationary random process, that is, one whose statistical properties do not change over the course of time. The moments of the process (mathematical expectation, variance, covariance) are independent of time, the disturbances follow a Gaussian distribution, and the variance must be finite. Volumes are observed at equidistant moments (a process in discrete time). We will take as an example the floating-rate demand savings accounts in LUF/BEF


from 1996 to 1999, observed monthly (data on CD-ROM). The form given in the model is that of the recurrence system

Volum t = a0 + Σ (i = 1 to p) ai · Volum t−i + εt

where a0 + a1·Volum t−1 + … + ap·Volum t−p represents the autoregressive part, ideal or perfectly adjusted to the chronological series and thus devoid of uncertainty, and εt is a moving average process:

εt = Σ (i = 0 to q) bi · ut−i
The ut−i values constitute 'white noise' (non-autocorrelated, centred normal random variables with mean 0 and standard deviation equal to 1); εt is therefore a centred random variable with constant variance. This type of model is an ARMA(p, q) model.

Optimisation of the ARMA(p, q) model

The first stage consists of constructing the model on the observed data without transformation (Volum t). A first approach is to test several ARMA(p, q) models and to select the model that maximises the usual adjustment criteria:

• The log-likelihood function. Box and Jenkins propose least squares estimators (R-square, plain or adjusted, in the example), identical to the maximum likelihood estimators if the random variables are considered normally distributed; this last point is consistent with the ARMA approach.
• AIC (Akaike's information criterion).
• The Schwarz criterion.
• Other criteria, not referenced in the example (FPE: final prediction error; BIC: Bayesian information criterion; Parzen CAT: criterion of autoregressive transfer function).

The other approach consists of constructing the model on the basis of the graphic autocorrelation test. This identification stage examines the autocorrelation at all the possible lags (t − n); the autocorrelation function must be decreasing or a damped oscillation. In the example, the graph shows, on the basis of the two-sided Student test (t = 1.96), that the one- and two-period lags have an autocorrelation significantly different from 0 at the 5 % confidence threshold. The ARMA model will therefore have an AR component of order two (AR(2)). This stage may be completed in a similar way by partial autocorrelation, which takes account of the effects of the intermediate values between Volum t and Volum t+r in the autocorrelation. The model to be tested is ARMA(2, 0). The random disturbances in the model must not be autocorrelated.
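The AR part of such a model can be estimated by ordinary least squares. The sketch below, on simulated rather than the CD-ROM data, recovers the coefficients of a hypothetical AR(2) process and checks the stationarity condition (sum of AR coefficients below one) discussed for Table 12.10; all figures are illustrative.

```python
# Illustrative sketch (hypothetical data): simulate an AR(2) process
# Volum_t = a0 + a1*Volum_{t-1} + a2*Volum_{t-2} + u_t, then recover the
# coefficients by OLS on (1, y_{t-1}, y_{t-2}) via the normal equations.
import random

def simulate_ar2(a0, a1, a2, n, seed=0):
    rng = random.Random(seed)
    y = [a0 / (1 - a1 - a2)] * 2                  # start near the process mean
    for _ in range(n):
        y.append(a0 + a1 * y[-1] + a2 * y[-2] + rng.gauss(0.0, 1.0))
    return y

def fit_ar2(y):
    """OLS fit of y_t on (1, y_{t-1}, y_{t-2}), solving X'X b = X'y."""
    rows = [(1.0, y[t - 1], y[t - 2]) for t in range(2, len(y))]
    target = y[2:]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * v for r, v in zip(rows, target)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, 3):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                            # back substitution
        coef[r] = (xty[r] - sum(xtx[r][c] * coef[c]
                                for c in range(r + 1, 3))) / xtx[r][r]
    return coef                                    # [a0, a1, a2]

a0, a1, a2 = fit_ar2(simulate_ar2(5.0, 0.35, 0.40, 3000))
assert a1 + a2 < 1.0                               # stationarity condition
```

With a long enough sample the recovered a1 and a2 sit close to the true 0.35 and 0.40, and their sum stays below one, the stationarity check applied to the fitted model in the text.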
If they are, some autocorrelation has not been captured in the AR part. There are different tests for this, including the Durbin–Watson non-autocorrelation error test. In the example of the savings accounts, the optimal ARMA model with normally distributed, non-correlated residuals is the ARMA(2, 2) model, with an acceptable adjusted R2 of 0.67. This model is stationary, as the sum of the AR coefficients is less than 1. The ARMA(2, 2) model obtained is shown in Table 12.10. The monthly accounting data and the zero-coupon rates for 1 month, 6 months, 1 year, 2 years, 4 years, 7 years and 10 years can be found on the CD-ROM; the model has been calculated on data from end November 1996 to end February 1999.

Table 12.10 ARMA(2, 2) model

R-square = 0.7251; Adjusted R-square = 0.6773
Akaike information criterion AIC(K) = 43.539; Schwarz criterion SC(K) = 43.777

Parameter | Estimate | Std error | T-stat
AR(1) | 0.35356 | 0.1951 | 1.812
AR(2) | 0.40966 | 0.2127 | 1.926
MA(1) | 0.2135E-3 | 0.1078 | 0.0019
MA(2) | −0.91454 | 0.05865 | −15.59
Constant | 0.90774E+10 | 0.7884E+10 | 1.151

Residuals: skewness 1.44; kurtosis 7.51; studentised range 5.33

If the model is nonstationary (nonstationary variance and/or mean), it can be converted into a stationary model by integration of order r after logarithmic transformation: if y is the transformed variable, apply the technique to the r-times differenced series Δ^r(yt), where Δyt = yt − yt−1, instead of to yt. We then use an ARIMA(p, r, q) procedure.16 If this procedure fails because of nonconstant volatility in the error term, it will be necessary to use the ARCH-GARCH or EGARCH models (Appendix 7).

B. The equation on the replicated positions

This equation may be estimated by a statistical model (such as the SAS/OR procedure PROC NLP), using multiple regression with the constraints

Σ (i = 3 months to 15 years) αi = 1   and   αi ≥ 0

It is also possible to estimate the replicated positions (b) with the single constraint (by using the SAS/STAT procedure)

Σ (i = 3 months to 15 years) αi = 1

In both cases, the duration of the demand product is a weighted average of the durations. In the second case, it is possible to obtain negative αi values; we then have a synthetic investment/loan position on which the duration is calculated.

16 Autoregressive integrated moving average.


Table 12.11 Multiple regression model obtained on BEF/LUF savings accounts on the basis of a SAS/STAT procedure (adjusted R-square 0.9431)

Variable | Parameter estimate | Standard error | Prob > |T|
Intercept (global margin) | −92 843 024 | 224 898 959 | 0.6839
F1M (stable part) | 0.086084 | 0.00583247 | 0.0001
F6M (stable rollover) | −0.015703 | 0.05014466 | 0.7573
F1Y (stable rollover) | 0.036787 | 0.07878570 | 0.6454
F2Y (stable rollover) | 0.127688 | 0.14488236 | 0.3881
F4Y (stable rollover) | 3.490592 | 1.46300205 | 0.0265
F7Y (stable rollover) | −4.524331 | 2.94918687 | 0.1399
F10Y (stable rollover) | 1.884966 | 1.63778119 | 0.2627

If α1y = 2.6 and α6m = −1.6 for a liability product, duration = 1 − (1.6/2.6) × 0.5 = 0.69 of a year. The bond weaves on the stable part have been calculated on the basis of the zero-coupon rates (1 month, 6 months, 1 year, 2 years, 4 years, 7 years, 10 years); see Table 12.11. Equation (b) is very well adjusted, as R2 is 94.31 %. The interest margin is of course negative, as the cost of the resources on liabilities is lower than market conditions. Like Wilson, we consider that the margin between the average rate of the interwoven bonds and the product rate is constant over the period. Arguably the margin is not constant, as the floating rate is not instantaneously re-set in line with changes in market rates; moreover, the quality of the clients, and therefore the credit spread, is not necessarily constant over the period. The sum of the coefficients associated with the interwoven bond positions is 1. This multiple linear regression allows us to calculate the duration of this product without a maturity date on the basis of the synthetic bond positions obtained. In the example, the duration obtained from the unstable and stable positions equals 1.42 years.
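The duration-additivity rule used throughout this section can be sketched directly. The weights below are the illustrative α1y = α2y = 0.5 case from Section 12.5.4.1, not the regression output of Table 12.11.

```python
# Sketch of duration additivity: the duration of the demand product is the
# alpha-weighted average of the durations of the replicating bond positions,
# with weights summing to one. Weights here are the illustrative 0.5/0.5 case.

def replicated_duration(weights_and_durations):
    assert abs(sum(w for w, _ in weights_and_durations) - 1.0) < 1e-9
    return sum(w * d for w, d in weights_and_durations)

# alpha_1y = 0.5 on a 1-year bond and alpha_2y = 0.5 on a 2-year bond give
# the 18 months (1.5 years) quoted in the text.
duration = replicated_duration([(0.5, 1.0), (0.5, 2.0)])
```

The same weighted average, applied to the full weave of positions from 3 months to 15 years, yields the 1.42 years reported for the savings-account example.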

Appendices

1 Mathematical concepts
2 Probabilistic concepts
3 Statistical concepts
4 Extreme value theory
5 Canonical correlations
6 Algebraic presentation of logistic regression
7 Time series models: ARCH-GARCH and EGARCH
8 Numerical methods for solving nonlinear equations

Appendix 1 Mathematical Concepts1

1.1 FUNCTIONS OF ONE VARIABLE

1.1.1 Derivatives

1.1.1.1 Definition

The derivative2 of the function f at the point x0 is defined as

f′(x0) = lim (h→0) [f(x0 + h) − f(x0)] / h

if this limit exists and is finite. If the function f is derivable at every point within an open interval ]a; b[, this defines a new function on that interval: the derivative function, termed f′.

1.1.1.2 Geometric interpretations

For a small value of h, the numerator in the definition represents the increase (or decrease) in the value of the function when the variable x passes from the value x0 to the neighbouring value x0 + h, that is, the length of AB (see Figure A1.1). The denominator h is in turn equal to the length of AC. The ratio is therefore equal to the slope of the straight line BC. When h tends towards 0, this straight line BC moves towards the tangent to the function's graph at the point C. The geometric interpretation of the derivative is therefore as follows: f′(x0) represents the slope of the tangent to the graph of f at the point x0. In particular, the sign of the derivative characterises the type of variation of the function: a positive (resp. negative) derivative has a corresponding increasing (resp. decreasing) function. The derivative therefore measures the speed at which the function increases (resp. decreases) in the neighbourhood of a point. The derivative of the derivative, termed the second derivative and written f″, will be positive when the first derivative f′ is increasing, that is, when the slope of the tangent to the graph of f increases as the variable x increases: the function is then said to be convex. Conversely, a function with a negative second derivative is said to be concave (see Figure A1.2).

1.1.1.3 Calculations

Finally, remember the elementary rules for calculating derivatives. First, those relative to operations between functions:

(f + g)′ = f′ + g′
(λf)′ = λf′

1 Readers wishing to find out more about these concepts should read: Bair J., Mathématiques générales, De Boeck, 1990; Esch L., Mathématique pour économistes et gestionnaires, De Boeck, 1992; Guerrien B., Algèbre linéaire pour économistes, Economica, 1992; Ortega M., Matrix Theory, Plenum, 1987; Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.
2 Also referred to as the first derivative.

326

Asset and Risk Management f(x) f(x0 + h)

B

C

θ

f(x0)

A

x0 + h

x0

x

Figure A1.1 Geometric interpretation of derivative f (x)

Figure A1.2 Convex and concave functions

(fg)' = f'g + fg'
\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}

Next, the rule relating to compound functions:

[g(f)]' = g'(f) \cdot f'

Finally, the formulae that give the derivatives of a few elementary functions:

(x^m)' = m x^{m-1}
(e^x)' = e^x
(a^x)' = a^x \ln a
(\ln x)' = \frac{1}{x}
(\log_a x)' = \frac{1}{x \ln a}

1.1.1.4 Extrema

The point x_0 is a local maximum (resp. minimum) of the function f if f(x_0) ≥ f(x) (resp. f(x_0) ≤ f(x)) for any x close to x_0.


The extrema of a derivable function within an open interval can be determined thanks to two conditions.

• The first-order (necessary) condition states that if x_0 is an extremum of f, then f'(x_0) = 0. At this point, called a stationary point, the tangent to the graph of f is therefore horizontal.
• The second-order (sufficient) condition allows the stationary points to be 'sorted' according to their nature. If x_0 is a stationary point of f and f''(x_0) > 0, we have a minimum; in the opposite situation, if f''(x_0) < 0, we have a maximum.

1.1.2 Taylor's formula

Consider a function f that one wishes to study in the neighbourhood of x_0 (let us say, at x_0 + h). One method is to replace this function by a polynomial – a function that is easily handled – of the variable h:

f(x_0 + h) = a_0 + a_1 h + a_2 h^2 + \cdots

For the polynomial to represent the function f, both must:

• take the same value at h = 0;
• have the same slope (that is, the same first derivative) at h = 0;
• have the same convexity or concavity (that is, the same second derivative) at h = 0;
• and so on.

The number of conditions imposed must correspond to the number of coefficients to be determined in the polynomial. It will be evident that these conditions lead to:

a_0 = f(x_0) = \frac{f(x_0)}{0!}
a_1 = f'(x_0) = \frac{f'(x_0)}{1!}
a_2 = \frac{f''(x_0)}{2} = \frac{f''(x_0)}{2!}
\cdots
a_k = \frac{f^{(k)}(x_0)}{k!}
\cdots

Generally, therefore, we can write:

f(x_0 + h) = \frac{f(x_0)}{0!} + \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + \cdots + \frac{f^{(n)}(x_0)}{n!} h^n + R_n

Here R_n, known as the expansion remainder, is the difference between the function f to be studied and the approximating polynomial. This remainder is negligible under certain conditions of regularity: as h tends towards 0, it tends towards 0 more quickly than h^n.


The use of Taylor's formula in this book does not require a high-degree polynomial, and we will therefore write more simply:

f(x_0 + h) \approx f(x_0) + \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + \frac{f'''(x_0)}{3!} h^3 + \cdots

For some elementary functions, Taylor's expansion takes a specific form that is worth remembering:

e^x \approx 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
(1 + x)^m \approx 1 + \frac{m}{1!} x + \frac{m(m-1)}{2!} x^2 + \frac{m(m-1)(m-2)}{3!} x^3 + \cdots
\ln(1 + x) \approx x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots

A specific case of power function expansion is the Newton binomial formula:

(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}
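The quality of such an expansion can be checked numerically. A minimal sketch, with x = 0.5 and the truncation degrees chosen purely for illustration: partial sums of the series for e^x converge quickly to the exact value.

```python
# Partial sums of the Taylor expansion e^x ≈ 1 + x/1! + x²/2! + ...
# compared with math.exp; the value x = 0.5 is an illustrative choice.
import math

def exp_taylor(x, n):
    """Sum of the Taylor expansion of e^x up to the degree-n term."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in (1, 2, 4, 8):
    print(n, exp_taylor(x, n), math.exp(x))   # the gap shrinks as n grows
```

Already at degree 8 the remainder is of the order of 0.5⁹/9!, i.e. a few units in the ninth decimal.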

1.1.3 Geometric series

If, within the Taylor formula for (1 + x)^m, x is replaced by (−x) and m by (−1), we obtain:

\frac{1}{1 - x} \approx 1 + x + x^2 + x^3 + \cdots

It is easy to demonstrate that when |x| < 1, the sequence

1
1 + x
1 + x + x^2
\cdots
1 + x + x^2 + \cdots + x^n
\cdots

converges towards the number 1/(1 − x). The limit of this sequence is therefore a sum comprising an infinite number of terms, termed a series. What we are concerned with here is the geometric series:

1 + x + x^2 + \cdots + x^n + \cdots = \sum_{n=0}^{\infty} x^n = \frac{1}{1 - x}

A relation linked to this geometric series is the one that gives the sum of the terms of a geometric progression: the sequence t_1, t_2, t_3, . . . is characterised by the relation t_k = t_{k-1} \cdot q (k = 2, 3, . . .), and the sum t_1 + t_2 + t_3 + \cdots + t_n is given by the relation:

\sum_{k=1}^{n} t_k = \frac{t_1 - t_{n+1}}{1 - q} = t_1 \frac{1 - q^n}{1 - q}
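Both relations are easy to check numerically. A minimal sketch with the illustrative ratio x = 0.5: the partial sums agree with the closed-form progression formula and converge to 1/(1 − x) = 2.

```python
# Partial sums 1 + x + x² + ... + x^n converge to 1/(1 - x) when |x| < 1,
# and agree with the geometric-progression formula t1 (1 - q^n) / (1 - q).

def geometric_partial_sum(x, n):
    """Sum of x^k for k = 0 .. n, computed term by term."""
    return sum(x ** k for k in range(n + 1))

x = 0.5
limit = 1 / (1 - x)                      # = 2
closed_form = (1 - x ** 11) / (1 - x)    # t1 = 1, q = x, 11 terms (k = 0..10)
print(geometric_partial_sum(x, 10), closed_form, limit)
```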

1.2 FUNCTIONS OF SEVERAL VARIABLES

1.2.1 Partial derivatives

1.2.1.1 Definition and graphical interpretation

For a function f of n variables x_1, x_2, . . . , x_n, the concept of derivative is defined in a similar way, although the increase h can relate to any one of the variables. We therefore have n concepts of derivative, one relative to each of the n variables, termed partial derivatives. The partial derivative of f(x_1, x_2, . . . , x_n) with respect to x_k at the point (x_1^{(0)}, x_2^{(0)}, . . . , x_n^{(0)}) is defined as:

f'_{x_k}(x_1^{(0)}, \ldots, x_n^{(0)}) = \lim_{h \to 0} \frac{f(x_1^{(0)}, \ldots, x_k^{(0)} + h, \ldots, x_n^{(0)}) - f(x_1^{(0)}, \ldots, x_k^{(0)}, \ldots, x_n^{(0)})}{h}

The geometric interpretation of the partial derivatives can only be envisaged for functions of two variables, as the graph of such a function lies in three dimensions (one dimension for each of the two variables and the third, the ordinate, for the values of the function). We thus examine the partial derivatives:

f'_x(x_0, y_0) = \lim_{h \to 0} \frac{f(x_0 + h, y_0) - f(x_0, y_0)}{h}
f'_y(x_0, y_0) = \lim_{h \to 0} \frac{f(x_0, y_0 + h) - f(x_0, y_0)}{h}

Let us now look at the graph of this function f(x, y) in three-dimensional space (see Figure A1.3), and consider the vertical plane that passes through the point (x_0, y_0) parallel to the Ox axis. Its intersection with the graph of f is the curve C_x. The same reasoning as that adopted for functions of one variable shows that the partial derivative f'_x(x_0, y_0) is equal to the slope of the tangent to the curve C_x at the point with abscissa (x_0, y_0) (that is, the slope of the graph of f in the direction of x). In the same way, f'_y(x_0, y_0) represents the slope of the tangent to C_y at that point.

1.2.1.2 Extrema without constraint

The point (x_1^{(0)}, . . . , x_n^{(0)}) is a local maximum (resp. minimum) of the function f if f(x_1^{(0)}, . . . , x_n^{(0)}) ≥ f(x_1, . . . , x_n) [resp. f(x_1^{(0)}, . . . , x_n^{(0)}) ≤ f(x_1, . . . , x_n)] for any (x_1, . . . , x_n) close to (x_1^{(0)}, . . . , x_n^{(0)}).


Figure A1.3 Geometric interpretation of partial derivatives
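The definition above — increase one variable at a time and form the usual difference quotient — can be sketched directly. The function f(x, y) = x²y + y³ and the point (2, 1) below are illustrative choices, not taken from the text; the exact values there are f'_x = 2xy = 4 and f'_y = x² + 3y² = 7.

```python
# Partial derivatives as one-variable difference quotients: each variable
# is increased by h in turn while the other is held fixed.

def partial_x(f, x0, y0, h=1e-6):
    return (f(x0 + h, y0) - f(x0, y0)) / h

def partial_y(f, x0, y0, h=1e-6):
    return (f(x0, y0 + h) - f(x0, y0)) / h

f = lambda x, y: x ** 2 * y + y ** 3
print(partial_x(f, 2.0, 1.0), partial_y(f, 2.0, 1.0))   # ≈ 4 and ≈ 7
```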

As for functions of a single variable, the extrema of a derivable function can be determined thanks to two conditions.

• The first-order (necessary) condition states that if x^{(0)} is an extremum of f, then all the partial derivatives of f are zero at x^{(0)}:

f'_{x_i}(x^{(0)}) = 0   (i = 1, . . . , n)

Referring to the geometric interpretation of the partial derivatives of a function of two variables: at this type of point (x_0, y_0), called a stationary point, the tangents to the curves C_x and C_y are therefore horizontal.

• The second-order (sufficient) condition allows the stationary points to be 'sorted' according to their nature, but first requires the definition of the Hessian matrix of the function f at the point x, made up of the second partial derivatives of f:

H(f(x_1, \ldots, x_n)) = \begin{pmatrix} f''_{x_1 x_1}(x) & f''_{x_1 x_2}(x) & \cdots & f''_{x_1 x_n}(x) \\ f''_{x_2 x_1}(x) & f''_{x_2 x_2}(x) & \cdots & f''_{x_2 x_n}(x) \\ \vdots & \vdots & & \vdots \\ f''_{x_n x_1}(x) & f''_{x_n x_2}(x) & \cdots & f''_{x_n x_n}(x) \end{pmatrix}

If x^{(0)} is a stationary point of f and H(f(x)) is p.d. at x^{(0)} or s.p. in a neighbourhood of x^{(0)}, we have a minimum. In the opposite situation, if H(f(x)) is n.d. at x^{(0)} or s.n. in a neighbourhood of x^{(0)}, we have a maximum.³

1.2.1.3 Extrema under constraint(s)

This is a similar concept, but one in which the analysis of the problem of extrema is restricted to those x values that obey one or more constraints.

³ These notions are explained in Section 1.3.2.1 of this Appendix.


The point (x_1^{(0)}, . . . , x_n^{(0)}) is a local maximum (resp. minimum) of the function f under the constraints

g_1(x) = 0
\cdots
g_r(x) = 0

if x^{(0)} verifies the constraints itself and f(x_1^{(0)}, . . . , x_n^{(0)}) ≥ f(x_1, . . . , x_n) [resp. f(x_1^{(0)}, . . . , x_n^{(0)}) ≤ f(x_1, . . . , x_n)] for any (x_1, . . . , x_n) in a neighbourhood of (x_1^{(0)}, . . . , x_n^{(0)}) satisfying the r constraints.

Solving this problem involves considering the Lagrangian function of the problem. This is a function of the (n + r) variables (x_1, . . . , x_n; m_1, . . . , m_r), the last r of which – known as the Lagrangian multipliers – each correspond to a constraint:

L(x_1, \ldots, x_n; m_1, \ldots, m_r) = f(x) + m_1 \cdot g_1(x) + \cdots + m_r \cdot g_r(x)

We will not go into the technical details of solving this problem. We will, however, point out an essential result: if the point (x^{(0)}; m^{(0)}) is such that x^{(0)} verifies the constraints and (x^{(0)}; m^{(0)}) is an extremum (without constraint) of the Lagrangian function, then x^{(0)} is an extremum for the problem of extrema under constraints.

1.2.2 Taylor's formula

Taylor's formula also generalises to functions of n variables, but the degree 1 term, which involves the first derivative, is replaced by n terms involving the n partial derivatives:

f'_{x_i}(x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})   (i = 1, 2, . . . , n)

In the same way, the degree 2 term, whose coefficient involves the second derivative, here becomes a set of n² terms in which the various second partial derivatives appear:

f''_{x_i x_j}(x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})   (i, j = 1, 2, . . . , n)

Thus, limiting the expansion to the degree 2 terms, Taylor's formula is written as follows:

f(x_1^{(0)} + h_1, x_2^{(0)} + h_2, \ldots, x_n^{(0)} + h_n) \approx f(x^{(0)}) + \frac{1}{1!} \sum_{i=1}^{n} f'_{x_i}(x^{(0)}) h_i + \frac{1}{2!} \sum_{i=1}^{n} \sum_{j=1}^{n} f''_{x_i x_j}(x^{(0)}) h_i h_j + \cdots


1.3 MATRIX CALCULUS

1.3.1 Definitions

1.3.1.1 Matrices and vectors

The term n-order matrix is given to a set of n² real numbers making up a square table of n rows and n columns.⁴ A matrix is generally represented by a capital letter (such as A), and its elements by the corresponding lower-case letter (a) with two allocated indices representing the row and column to which the element belongs: a_{ij} is the element of matrix A located at the intersection of row i and column j within A. Matrix A can therefore be written generally as follows:

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} \end{pmatrix}

In the same way, a vector of dimension n is a set of n real numbers forming a column table. The elements of a vector are its components and are referred to by a single index:

X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_n \end{pmatrix}

1.3.1.2 Specific matrices

The diagonal elements of a matrix are the elements a_{11}, a_{22}, . . . , a_{nn}. They are located on the diagonal of the table that starts from the upper left-hand corner; this is known as the principal diagonal.

A matrix is defined as symmetrical if the elements symmetrical with respect to the principal diagonal are equal: a_{ij} = a_{ji}. Here is an example:

A = \begin{pmatrix} 2 & -3 & 0 \\ -3 & 1 & \sqrt{2} \\ 0 & \sqrt{2} & 0 \end{pmatrix}

⁴ More generally, a matrix is a rectangular table with the format (m, n): m rows and n columns. We will, however, only be looking at square matrices here.


An upper triangular matrix is a matrix in which the elements located underneath the principal diagonal are zero: a_{ij} = 0 when i > j. For example:

A = \begin{pmatrix} 2 & -1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}

The concept of a lower triangular matrix is of course defined in a similar way. Finally, a diagonal matrix is one that is both upper triangular and lower triangular. Its only non-zero elements are the diagonal elements: a_{ij} = 0 when i and j are different. Generally, this type of matrix is represented by:

A = \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix} = \mathrm{diag}(a_1, a_2, \ldots, a_n)

1.3.1.3 Operations

The sum of two matrices, as well as the multiplication of a matrix by a scalar, are completely natural operations: the operation in question is carried out element by element. Thus:

(A + B)_{ij} = a_{ij} + b_{ij}
(\lambda A)_{ij} = \lambda a_{ij}

These definitions are also valid for vectors:

(X + Y)_i = x_i + y_i
(\lambda X)_i = \lambda x_i

The product of two matrices A and B is a matrix of the same order as A and B, in which the element (i, j) is obtained by calculating the sum of the products of the elements in row i of A with the corresponding elements in column j of B:

(AB)_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{in} b_{nj} = \sum_{k=1}^{n} a_{ik} b_{kj}

We will have, for example:

\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}

Despite the apparently complex definition, the matrix product has a number of classical properties: it is associative and distributive with respect to addition. However, it needs to be handled with care, as it lacks one of the classical properties: it is not commutative. In general, AB does not equal BA!
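The 'rows by columns' rule can be written out directly. A minimal sketch with two illustrative 2 × 2 matrices, also showing that swapping the factors changes the result:

```python
# 'Rows by columns' product of two square matrices of the same order n:
# (AB)_ij = sum over k of A[i][k] * B[k][j].

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))   # [[2, 1], [4, 3]]
print(mat_mul(B, A))   # [[3, 4], [1, 2]]  — AB differs from BA
```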


The product of a matrix by a vector is defined using the same 'rows by columns' procedure:

(AX)_i = \sum_{k=1}^{n} a_{ik} x_k

The transpose of a matrix A is the matrix A^t obtained by permuting the elements symmetrical with respect to the principal diagonal or, which amounts to the same thing, by permuting the roles of the rows and columns in matrix A:

(A^t)_{ij} = a_{ji}

A matrix is thus symmetrical if, and only if, it is equal to its transpose. In addition, this operation, applied to a (column) vector, gives the corresponding row vector as its result.

The inverse of the matrix A is the matrix A^{-1}, if it exists, such that AA^{-1} = A^{-1}A = \mathrm{diag}(1, \ldots, 1) = I. For example, it is easy to verify that:

\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}

Finally, let us define the trace of a matrix: it is the sum of the matrix's diagonal elements:

\mathrm{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii}
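These three operations are quick to sketch. The 2 × 2 matrix and its inverse below are written out by hand (the inverse is not computed); the product check confirms AA⁻¹ = I.

```python
# Transpose, trace, and a check that A * A_inv gives the identity matrix.

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A     = [[1, 1], [0, 1]]
A_inv = [[1, -1], [0, 1]]       # inverse written out by hand
print(mat_mul(A, A_inv))        # [[1, 0], [0, 1]] — the identity
print(transpose(A), trace(A))
```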

1.3.2 Quadratic forms

1.3.2.1 Quadratic form and class of a symmetrical matrix

A quadratic form is a polynomial function of n variables containing only second-degree terms:

Q(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j

If we construct a matrix A from the coefficients a_{ij} (i, j = 1, . . . , n) and the vector X from the variables x_i (i = 1, . . . , n), we can give a matrix expression to the quadratic form: Q(X) = X^t A X. In fact, by developing the right-hand member, we produce:

X^t A X = \sum_{i=1}^{n} x_i (AX)_i = \sum_{i=1}^{n} x_i \sum_{j=1}^{n} a_{ij} x_j = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j


A quadratic form can always be associated with a matrix A, and vice versa. The matrix, however, is not unique. In fact, the quadratic form Q(x_1, x_2) = 3x_1^2 − 4x_1 x_2 can be associated with the matrices

A = \begin{pmatrix} 3 & -2 \\ -2 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 3 & -6 \\ 2 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 3 & 0 \\ -4 & 0 \end{pmatrix}

as well as an infinite number of others. Amongst all these matrices, only one is symmetrical (A in the example given). There is therefore a bijection between the set of quadratic forms and the set of symmetrical matrices.

The class of a symmetrical matrix is defined on the basis of the sign of the associated quadratic form. Thus, the non-zero matrix A is said to be positive definite (p.d.) if X^t A X > 0 for any X not equal to 0, and semi-positive (s.p.) when:

X^t A X \ge 0 for any X \ne 0
there is one Y \ne 0 such that Y^t A Y = 0

A matrix is negative definite (n.d.) or semi-negative (s.n.) under the inverse inequalities, and the term non-definite is given to a symmetrical matrix for which there are some X and Y \ne 0 such that X^t A X > 0 and Y^t A Y < 0.

The symmetrical matrix

A = \begin{pmatrix} 5 & -3 & -4 \\ -3 & 10 & 2 \\ -4 & 2 & 8 \end{pmatrix}

is thus p.d., as the associated quadratic form can be written as:

Q(x, y, z) = 5x^2 + 10y^2 + 8z^2 - 6xy - 8xz + 4yz = (x - 3y)^2 + (2x - 2z)^2 + (y + 2z)^2

This form is never negative, and vanishes only when:

x - 3y = 0
2x - 2z = 0
y + 2z = 0

that is, when x = y = z = 0.

1.3.2.2 Linear equation systems

A system of n linear equations with n unknowns is a set of relations of the following type:

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\cdots
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n

In it, the a_{ij}, x_j and b_i are respectively the coefficients, the unknowns and the second members. They are written naturally in matrix and vector form: A, X and B. Using this notation, the system is written in an equivalent but more condensed way:

AX = B
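The double-sum definition and the sum-of-squares decomposition can be checked against each other. A minimal sketch using the 3 × 3 p.d. matrix of the text; the test points are illustrative choices.

```python
# Quadratic form Q(X) = Σ_i Σ_j a_ij x_i x_j for the symmetrical matrix
# of the text, checked against its sum-of-squares decomposition
# (x - 3y)² + (2x - 2z)² + (y + 2z)², which shows the matrix is p.d.

A = [[5, -3, -4],
     [-3, 10, 2],
     [-4, 2, 8]]

def quad_form(A, X):
    n = len(X)
    return sum(A[i][j] * X[i] * X[j] for i in range(n) for j in range(n))

def sum_of_squares(x, y, z):
    return (x - 3 * y) ** 2 + (2 * x - 2 * z) ** 2 + (y + 2 * z) ** 2

for X in [(1, 0, 0), (1, 2, -1), (-3, 1, 4)]:
    assert quad_form(A, list(X)) == sum_of_squares(*X)
    assert quad_form(A, list(X)) > 0
print("Q(X) > 0 for every non-zero X tested")
```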


For example, the system of equations

2x + 3y = 4
4x - y = -2

can also be written as:

\begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 4 \\ -2 \end{pmatrix}
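This 2 × 2 system can be solved explicitly through the inverse of A, using the closed-form inverse of a 2 × 2 matrix (a fact of elementary algebra, not stated in the text): for A = [[a, b], [c, d]] with det = ad − bc ≠ 0, A⁻¹ = (1/det)[[d, −b], [−c, a]].

```python
# Solving the system 2x + 3y = 4, 4x - y = -2 of the text via X = A⁻¹B,
# with exact rational arithmetic.
from fractions import Fraction

a, b, c, d = 2, 3, 4, -1
det = a * d - b * c                      # = -14, non-zero so A⁻¹ exists
B = [4, -2]
x = Fraction(d * B[0] - b * B[1], det)   # first component of A⁻¹B
y = Fraction(-c * B[0] + a * B[1], det)  # second component of A⁻¹B
print(x, y)                              # -1/7 and 10/7
assert 2 * x + 3 * y == 4 and 4 * x - y == -2
```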

If the inverse of matrix A exists, it can easily be seen that the system admits one and only one solution, given by X = A^{-1}B.

1.3.2.3 Case of the variance–covariance matrix⁵

The matrix

V = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2 \end{pmatrix}

of the variances and covariances of a number of random variables X_1, X_2, . . . , X_n is a matrix that is either p.d. or s.p. In effect, regardless of what the numbers \lambda_1, \lambda_2, . . . , \lambda_n are, not all zero and making up the vector \Lambda, we have:

\Lambda^t V \Lambda = \sum_{i=1}^{n} \sum_{j=1}^{n} \lambda_i \lambda_j \sigma_{ij} = \mathrm{var}\left(\sum_{i=1}^{n} \lambda_i X_i\right) \ge 0

It can even be said, according to this result, that the variance–covariance matrix V is p.d. except when there are coefficients \lambda_1, \lambda_2, . . . , \lambda_n, not all zero, such that the random variable \lambda_1 X_1 + \cdots + \lambda_n X_n = \sum_{i=1}^{n} \lambda_i X_i is degenerate, in which case V is s.p. This degeneration may occur, for example, when:

• one of the variables is degenerate;
• some variables are perfectly correlated;
• the matrix V is obtained on the basis of a number of observations strictly lower than the number of variables.

It will then be evident that the variance–covariance matrix can be expressed in matrix form, through the relation:

V = E[(X - \mu)(X - \mu)^t]

1.3.2.4 Choleski factorisation

Consider a positive definite symmetrical matrix A. It can be demonstrated that there exists a lower triangular matrix L with strictly positive diagonal elements such that A = LL^t.

⁵ The concepts necessary for an understanding of this example are shown in Appendix 2.


This factorisation process is known as the Choleski factorisation. We will not be demonstrating this property, but will show, using the previous example, how the matrix L is found:

LL^t = \begin{pmatrix} a & 0 & 0 \\ b & c & 0 \\ d & f & g \end{pmatrix} \begin{pmatrix} a & b & d \\ 0 & c & f \\ 0 & 0 & g \end{pmatrix} = \begin{pmatrix} a^2 & ab & ad \\ ab & b^2 + c^2 & bd + cf \\ ad & bd + cf & d^2 + f^2 + g^2 \end{pmatrix} = A = \begin{pmatrix} 5 & -3 & -4 \\ -3 & 10 & 2 \\ -4 & 2 & 8 \end{pmatrix}

It is then sufficient to work through the last equality in order to find a, b, c, d, f and g in succession, which gives the following for the matrix L:

L = \begin{pmatrix} \sqrt{5} & 0 & 0 \\ -\dfrac{3\sqrt{5}}{5} & \dfrac{\sqrt{205}}{5} & 0 \\ -\dfrac{4\sqrt{5}}{5} & -\dfrac{2\sqrt{205}}{205} & \dfrac{14\sqrt{41}}{41} \end{pmatrix}
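The column-by-column procedure just described — find each diagonal element, then the entries below it — can be sketched as a short routine; applied to the 3 × 3 example of the text, it reproduces the matrix L above.

```python
# Choleski factorisation A = L Lᵗ of a positive definite symmetrical
# matrix, proceeding column by column exactly as with a, b, c, d, f, g.
import math

def choleski(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)                  # diagonal element
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]               # element below the diagonal
    return L

A = [[5, -3, -4], [-3, 10, 2], [-4, 2, 8]]
L = choleski(A)
# Check that L Lᵗ reproduces A.
for i in range(3):
    for j in range(3):
        prod = sum(L[i][k] * L[j][k] for k in range(3))
        assert abs(prod - A[i][j]) < 1e-10
print([round(v, 6) for v in (L[0][0], L[1][0], L[2][1], L[2][2])])
```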

Appendix 2
Probabilistic Concepts¹

2.1 RANDOM VARIABLES

2.1.1 Random variables and probability law

2.1.1.1 Definitions

Let us consider a fortuitous phenomenon, that is, a phenomenon that under given initial conditions corresponds to several possible outcomes. A numerical magnitude that depends on the observed result is known as a random variable, or r.v. In addition, probabilities are associated with the various possible results or events defined in the context of the fortuitous phenomenon. It is therefore interesting to find out the probabilities of the various events defined on the basis of the r.v. What we are looking at here is the concept of the law of probability of the r.v. Thus, if the r.v. is termed X, the law of probability of X is defined by the range of the following probabilities: Pr[X ∈ A], for every subset A of R.

The aim of the concept of probability law is a bold one: the subsets A of R are too numerous for all the probabilities to be known. For this reason, we are content to work with just the sets ]−∞; t]. This therefore defines a function of the variable t, the cumulative distribution function, or more simply the distribution function (d.f.), of the random variable:

F(t) = \Pr[X \le t]

It can be demonstrated that this function, defined on R, is increasing, that it lies between 0 and 1, that it admits the ordinates 0 and 1 as horizontal asymptotes,

\lim_{t \to -\infty} F(t) = 0, \qquad \lim_{t \to +\infty} F(t) = 1,

and that it is right-continuous: \lim_{s \to t+} F(s) = F(t). These properties are summarised in Figure A2.1. In addition, despite its simplicity, the d.f. allows almost the whole of the probability law of X to be recovered; thus:

\Pr[s < X \le t] = F(t) - F(s)
\Pr[X = t] = F(t) - F(t-)

2.1.1.2 Quantile

Sometimes there is a need to solve the opposite problem: knowing a probability level u, determine the value of t such that F(t) = Pr[X ≤ t] = u. This value is known as the quantile of the r.v. X at the point u; its definition is shown in Figure A2.2.
¹ Readers wishing to find out more about these concepts should read:
Baxter M. and Rennie A., Financial Calculus, Cambridge University Press, 1996.
Feller W., An Introduction to Probability Theory and its Applications (2 volumes), John Wiley & Sons, Inc., 1968.
Grimmett G. and Stirzaker D., Probability and Random Processes, Oxford University Press, 1992.
Roger P., Les outils de la modélisation financière, Presses Universitaires de France, 1991.
Ross S. M., Initiation aux probabilités, Presses Polytechniques et Universitaires Romandes, 1994.


Figure A2.1 Distribution function


Figure A2.2 Quantile


Figure A2.3 Quantile in jump scenario

In two cases, however, the definition that we have just given is unsuitable and needs to be adapted. First of all, if the d.f. of X shows a jump that covers the ordinate u, no abscissa corresponds to it, and the abscissa of the jump is naturally chosen (see Figure A2.3). Next, if the ordinate u corresponds to a plateau on the d.f. graph, there is an infinite number of abscissas (those of the plateau, from m to M) to choose from (see Figure A2.4). In this case, the abscissa defined by the relation Q(u) = um + (1 − u)M can be chosen. The quantile function thus defined generalises the concept of the reciprocal function of the d.f.

2.1.1.3 Discrete random variable

A discrete random variable corresponds to a situation in which the set of possible values of the variable is finite or countably infinite. In this case, if the various possible values


Figure A2.4 Quantile in plateau scenario

and corresponding probabilities are known,

x_i:  x_1  x_2  \cdots  x_n  \cdots
p_i:  p_1  p_2  \cdots  p_n  \cdots

\Pr[X = x_i] = p_i \quad (i = 1, 2, \ldots, n, \ldots) \qquad \sum_i p_i = 1

the law of probability of X can be easily determined:

\Pr[X \in A] = \sum_{\{i : x_i \in A\}} p_i

The d.f. of a discrete r.v. is a stepped function: the abscissas of the jumps correspond to the various possible values of X and the heights of the jumps are equal to the associated probabilities (see Figure A2.5). In particular, a r.v. is defined as degenerate if it can only take on one value x (it is also referred to as a certain variable): Pr[X = x] = 1. The d.f. of a degenerate variable is 0 to the left of x and 1 from x onwards.

2.1.1.4 Continuous random variable

In contrast to the discrete r.v., the set of possible values of a r.v. may be continuous (an interval, for example) with no individual value having a strictly positive probability:

\Pr[X = x] = 0 \quad \text{for every } x


Figure A2.5 Distribution function for a discrete random variable
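The stepped d.f. of a discrete r.v. is easy to build directly from the values and probabilities. A minimal sketch; the fair six-sided die below is an illustrative distribution, not one taken from the text.

```python
# Stepped distribution function of a discrete r.v.: F(t) = Σ_{x_i ≤ t} p_i.
# A fair six-sided die is used as an illustrative discrete distribution.

values = [1, 2, 3, 4, 5, 6]
probs  = [1 / 6] * 6

def F(t):
    """Distribution function Pr[X <= t] of the discrete r.v."""
    return sum(p for x, p in zip(values, probs) if x <= t)

print(F(0), F(3), F(3.5), F(6))   # 0, then jumps of height 1/6 at each value
assert F(0) == 0 and abs(F(6) - 1) < 1e-12
assert abs(F(3) - F(2.999) - 1 / 6) < 1e-12   # jump height = Pr[X = 3]
```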

342

Asset and Risk Management f

x x+h

Figure A2.6 Probability density

In this case, the distribution of probabilities over the set of possible values is expressed using a density function f : for a sufﬁciently small h, we will have Pr[x < X ≤ x + h] ≈ hf (x). This deﬁnition is shown in Figure A2.6. The law of probability is obtained from the density through the following relation: Pr[X ∈ A] = f (x) dx A

And as a particular case:

F (t) =

t −∞

f (x) dx

2.1.1.5 Multivariate random variables Often there is a need to consider several r.v.s simultaneously X1 , X2 , . . . , Xm , associated with the same fortuitous phenomenon.2 Here, we will simply show the theory for a bivariate random variable, that is, a pair of r.v.s (X, Y ); the general process for a multivariate random variable can easily be deduced from this. The law of probability for a bivariate random variable is deﬁned as the set of the following probabilities: Pr [(X, Y ) ∈ A], for every subset A of R2 . The joint distribution function is deﬁned F (s, t) = Pr([X ≤ s] ∩ [Y ≤ t]) and the discrete and continuous bivariate random variables are deﬁned respectively by: pij = Pr([X = xi ] ∩ [Y = yj ]) f (x, y) dx dy Pr[(X, Y ) ∈ A] = A

Two r.v.s are deﬁned as independent when they are not inﬂuenced either from the point of view of possible values or through the probability of the events that they deﬁne. More formally, X and Y are independent when: Pr([X ∈ A] ∩ [Y ∈ B]) = Pr[X ∈ A] · Pr[Y ∈ B] for every A and B in R. 2

For example, the return on various ﬁnancial assets.

Probabilistic Concepts

343

It can be shown that two r.v.s are independent if, and only if, their joint d.f. is equal to the product of the d.f.s of each of the r.v.s: F (s, t) = FX (s) · FY (t). And that this condition, for discrete or continuous random variables, shows as: pij = Pr[X = xi ] · Pr[Y = yj ] f (x, y) = fX (x) · fY (y) 2.1.2 Typical values of random variables The aim of the typical values of a r.v. is to summarise the information contained in its probability law in a number of representative parameters: parameters of location, dispersion, skewness and kurtosis. We will be looking at one from each group. 2.1.2.1 Mean The mean is a central value that locates a r.v. by dividing the d.f. into two parts with the same area (see Figure A2.7). The mean µ of the r.v. X is therefore such that: µ +∞ F (t) dt = [1 − F (t)] dt −∞

µ

The mean of a r.v. can be calculated on the basis of the d.f.: 0 +∞ [1 − F (t)] dt − F (t) dt µ= −∞

0

the formula reducing for a positive r.v. as follows: +∞ µ= [1 − F (t)] dt 0

It is possible to demonstrate that for a discrete r.v. and a continuous r.v., we have the formulae: xi pi µ= µ=

Figure A2.7 Mean of a random variable

i +∞

−∞

xf (x) dx

344

Asset and Risk Management

The structure of these two formulae shows that µ integrates the various possible values for the r.v. X by weighting them through the probabilities associated with these values. It can be shown3 that these formulae generalised into an abstract integral of X(ω) with respect to the measure of probability Pr in the set of the possible outcomes ω of the fortuitous phenomenon. This integral is known as the expectation of the r.v. X: E(X) =

X(ω)d Pr(ω)

According to the foregoing, there is equivalence between the concepts of expectation and mean (E(X) = µ) and we will interchange both these terms from now on. The properties of the integral show that the expectation is a linear operator: E(aX + bY + c) = aE(X) + bE(Y ) + c And that if X and Y are independent, them E(XY ) = E(X) · E(Y ). In addition, for a discrete r.v. or a continuous r.v., the expectation of a function of a r.v. variable is given by: E(g(X)) =

E(g(X)) =

g(xi )pi

i +∞ −∞

g(x)f (x) dx

Let us remember ﬁnally the law of large numbers,4 which for a sequence of independent r.v.s X1 , X2 , . . . , Xn with identical distribution and a mean µ, expresses that regardless of what ε > 0 may be X1 + X2 + · · · + Xn lim Pr − µ ≤ ε = 1 n→∞ n This law justiﬁes taking the average of a sample to estimate the mean of the population and in particular estimating the probability of an event through the frequency of that event’s occurrence when a large number of realisations of the fortuitous phenomenon occur. 2.1.2.2 Variance and standard deviation One of the most commonly used dispersion indices (that is, a measurement of the spread of the r.v.s values around its mean) is the variance σ 2 , deﬁned as: σ 2 = var(X) = E[(X − µ)2 ] 3 This development is part of measure theory, which is outside the scope of this work. Readers are referred to Loeve M., Probability Theory (2 volumes), Springer-Verlag, 1977. 4 We are showing this law in its weak form here.

Probabilistic Concepts

345

f

x

Figure A2.8 Variance of a random variable

By developing the right member, we can therefore arrive at the variance σ 2 = E(X2 ) − µ2 For a discrete r.v. and a continuous r.v., this will give: (xi − µ)2 pi = xi 2 pi − µ2 σ2 = σ2 =

i

i +∞

−∞

(x − µ)2 f (x) dx =

+∞

−∞

x 2 f (x) dx − µ2

An example of the interpretation of this parameter is found in Figure A2.8. It can be demonstrated that var(aX + b) = a 2 var(X). And that if X and Y are independent, then var(X + Y ) = var(X) + var(Y ). Alongside the variance, the dimension of which is the square of the dimension of X, we can also use the standard deviation, which is simply the square root: σ = var(X) 2.1.2.3 Fisher’s skewness and kurtosis coefﬁcients Fisher’s skewness coefﬁcient is deﬁned by: γ1 =

E[(X − µ)3 ] σ3

It is interpreted essentially on the basis of its sign: if γ1 > 0 (resp. 0) exp − 2 σ 2πσ x The graph for this density is shown in Figure A2.12 and its typical values are given by: E(X) = eµ+

σ2 2

var(X) = e2µ+σ (eσ − 1) 2 γ1 (X) = (eσ + 2) eσ 2 − 1 2

2

γ2 (X) = (e3σ + 3e2σ + 6eσ + 6)(eσ − 1) 2

2

2

2

This conﬁrms the skewness with concentration to the left and the spreading to the right, observed on the graph.

350

Asset and Risk Management f(x)

x

Figure A2.12 Log-normal distribution

We would point out ﬁnally that a result of the same type as the central limit theorem also leads to the log-normal law: this is the case in which the effects represented by the various r.v.s accumulate through a multiplication model rather than through an addition model, because of the fundamental property of the logarithms: ln(x1 · x2 ) = ln x1 + ln x2 .

2.2.2 Other theoretical distributions 2.2.2.1 Poisson distribution The Poisson r.v., with parameter µ, is a discrete X r.v. that takes all the complete positive integer values 0, 1, 2 etc. with the associated probabilities of: Pr[X = k] = e−µ

µk k!

k∈N

The typical values for this distribution are given by: E(X) = µ var(X) = µ

2.2.2.2 Binomial distribution The Bernoulli scheme is a probability model applied to a very wide range of situations. It is characterised by • a ﬁnite number of independent trials; • during each trial, two results only – success and failure – are possible; • also during each trial, the probability of a success occurring is the same. If n is the number of trials and p the probability of each success succeeding, the term used is Bernoulli scheme with parameters (n; p) and the number of successes out of the

Probabilistic Concepts

351

n tests is a binomial parameter r.v., termed B(n, p). This discrete random variable takes the values 0, 1, 2, . . . , n with the following associated probabilities:5 n p k (1 − p)n−k Pr[B(n; p) = k] = k ∈ {0, 1, . . . , n} k The sum of these probabilities equals 1, in accordance with Newton’s binomial formula. In addition, the typical values for this distribution are given by: E(B(n; p)) = np var(B(n; p)) = np(1 − p) The binomial distribution allows two interesting approximations when the n parameter is large. Thus, for a very small p, we have the approximation through Poisson’s law with np parameter: (np)k Pr[B(n; p) = k] ≈ e−np k! For a p that is not √ to close to 0 or 1, the binomial r.v. tends towards a normal law with parameters (np; np(1 − p)), and more speciﬁcally: k − µ − 12 k − µ + 12 − Pr[B(n; p) = k] ≈ σ σ 2.2.2.3 Student distribution The Student distribution, with n degrees of freedom, is deﬁned by the density −(ν+1)/2 ) ( ν+1 x2 2 1+ f (x) = √ ν ( ν2 ) νπ +∞ In this expression, the gamma function is deﬁned by (n) = 0 e−x x n−1 dx. This generalises the factorial function as (n) = (n − 1) · (n − 1) and for integer n, we have: (n) = (n − 1)! This is, however, deﬁned for n values that are not integer: all the positive real values of n and, for example: √ ( 12 ) = π We are not representing the graph for this density here, as it is symmetrical with respect to the vertical axis and bears a strong resemblance to the standard normal density graph, although for ν > 4 the kurtosis coefﬁcient value is strictly positive: E(X) = 0

ν ν −2 γ1 (X) = 0 6 γ2 (X) = ν −4

var(X) =

5

Remember that

p! n = k p!(n − p)!

352

Asset and Risk Management

Finally, it can be stated that when the number of degrees of freedom tends towards inﬁnity, the Student distribution tends towards the standard normal distribution, this asymptotic property being veriﬁed in practice as soon as ν reaches the value of 30. 2.2.2.4 Uniform distribution A r.v. is said to be uniform in the interval [a; b] when the probability of its taking a value between t and t + h6 depends only on these two boundaries through h. It is easy to establish, on that basis, that we are looking at a r.v. that only takes a value within the interval [a; b] and that its density is necessarily constant: f (x) = 1/(b − a) (a < x < b) Its graph is shown in Figure A2.13. The principal typical values for the uniform r.v. are given by: a+b 2 (a − b)2 var(X) = 12 γ1 (X) = 0 6 γ2 (X) = − 5 E(X) =

This uniform distribution is the origin of some simulation methods, in which the generation of random numbers distributed uniformly in the interval [0; 1] allows random numbers distributed according to a given law of probability to be obtained (Figure A2.14). The way in which this transformation occurs is explained in Section 7.3.1. Let us examine here how the (pseudo-)random numbers uniformly distributed in [0; 1] can be obtained. The sequence x_1, x_2, . . . , x_n is constructed according to residue classes. On the basis of an initial value ρ_0 (equal to 1, for example), we construct, for i = 1, 2, . . . , n:

x_i = decimal part of (c_1 ρ_{i−1})
ρ_i = c_2 x_i

Here, the constants c_1 and c_2 are suitably chosen. Thus, for c_1 = 13.3669 and c_2 = 94.3795, we find successively the values shown in Table A2.1.

Figure A2.13 Uniform distribution (density equal to 1/(b − a) between a and b, and to 0 elsewhere)

⁶ These two values are assumed to belong to the interval [a; b].

Probabilistic Concepts



Figure A2.14 Random numbers uniformly distributed in [0; 1]

Table A2.1  Values of x_i and ρ_i

i     x_i         ρ_i
0     -           1.000000
1     0.366900    34.627839
2     0.866885    81.813352
3     0.580898    55.768652
4     0.453995    42.847849
5     0.742910    70.115509
6     0.226992    21.423384
7     0.364227    34.375527
8     0.494233    46.645452
9     0.505097    47.670759
10    0.210265    19.844676
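The residue-class construction can be reproduced in a few lines; a minimal sketch (variable names are ours):

```python
def residue_class_generator(n, rho0=1.0, c1=13.3669, c2=94.3795):
    # Pseudo-random numbers in [0; 1] built from residue classes:
    #   x_i = decimal part of (c1 * rho_{i-1}),  rho_i = c2 * x_i
    xs, rhos = [], []
    rho = rho0
    for _ in range(n):
        x = (c1 * rho) % 1.0          # fractional (decimal) part
        rho = c2 * x
        xs.append(x)
        rhos.append(rho)
    return xs, rhos

xs, rhos = residue_class_generator(10)
```

The first iterates reproduce the entries of Table A2.1 up to the rounding used in the table.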

2.2.2.5 Generalised error distribution

The generalised error distribution with parameter ν is defined by the density

f(x) = [ν √(Γ(3/ν)) / (2 Γ(1/ν)^{3/2})] · exp[ −( Γ(3/ν)/Γ(1/ν) )^{ν/2} |x|^ν ]

The graph for this density is shown in Figure A2.15. This distribution is symmetrical with respect to 0; it corresponds to a normal distribution for ν = 2 and gives rise to a leptokurtic distribution (resp. a negative-kurtosis distribution) for ν < 2 (resp. ν > 2).

2.3 STOCHASTIC PROCESSES

2.3.1 General considerations

The term stochastic process is applied to a random variable that is a function of the time variable: {X_t : t ∈ T}.

Figure A2.15 Generalised error distribution (densities for ν = 1, 2 and 3)

If the set T of times is discrete, the stochastic process is simply a sequence of random variables. However, in a number of financial applications, such as the Black and Scholes model, it is necessary to consider stochastic processes in continuous time. For each possible result ω ∈ Ω, the function X_t(ω) of the variable t is known as a path of the stochastic process.

A stochastic process is said to have independent increments when, regardless of the times t_1 < t_2 < . . . < t_n, the r.v.s X_{t_1}, X_{t_2} − X_{t_1}, X_{t_3} − X_{t_2}, . . . are independent. In the same way, a stochastic process is said to have stationary increments when, for every t and h, the r.v.s X_{t+h} − X_t and X_h are identically distributed.

2.3.2 Particular stochastic processes

2.3.2.1 The Poisson process

We consider a process of random occurrences of an event in time, corresponding to the set [0; +∞[. Here, the principal interest lies not in the occurrence times directly, but in the number of occurrences within given intervals. The r.v. that represents the number of occurrences within the interval [t_1; t_2] is termed n(t_1, t_2). This process is called a Poisson process if it obeys the following hypotheses:

• the numbers of occurrences in separate intervals of time are independent;
• the distribution of the number of occurrences within an interval of time depends on that interval only through its duration: Pr[n(t_1, t_2) = k] is a function of (t_2 − t_1), henceforth termed p_k(t_2 − t_1);
• there is no multiple occurrence: if h is small, Pr[n(0; h) ≥ 2] = o(h);
• there is a rate of occurrence α such that Pr[n(0; h) = 1] = αh + o(h).

It can be demonstrated that under these hypotheses, the r.v. 'number of occurrences within an interval of duration t' is distributed according to a Poisson law with parameter αt:

p_k(t) = e^(−αt) (αt)^k / k!,   k = 0, 1, 2, . . .
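This Poisson law of the counts can be illustrated by simulation, using the standard construction of a Poisson process through exponential inter-arrival times (a construction not detailed in the text); a minimal sketch with illustrative parameters:

```python
import random

def poisson_process_counts(alpha, t, n_paths, seed=42):
    # Number of occurrences in [0; t] for a Poisson process of rate alpha,
    # simulated through exponential inter-arrival times
    rng = random.Random(seed)
    counts = []
    for _ in range(n_paths):
        elapsed, k = 0.0, 0
        while True:
            elapsed += rng.expovariate(alpha)
            if elapsed > t:
                break
            k += 1
        counts.append(k)
    return counts

counts = poisson_process_counts(alpha=2.0, t=3.0, n_paths=20000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Both should be close to alpha * t = 6, in line with E(X_t) = var(X_t) = alpha*t
```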


To simplify, we write X_t = n(0; t). This is a stochastic process that counts the number of occurrences over time. The path of such a process is therefore a stepped function, with the abscissas of the jumps corresponding to the occurrence times and the heights of the jumps equal to 1. It can be demonstrated that the process has independent and stationary increments and that E(X_t) = var(X_t) = αt.

This process can be generalised as follows. We consider:

• A Poisson process X_t as defined above; with the time of the kth occurrence expressed as T_k, we have X_t = #{k : T_k ≤ t}.
• A sequence Y_1, Y_2, . . . of independent and identically distributed r.v.s, independent of the Poisson process.

The process Z_t = Σ_{k:T_k≤t} Y_k is known as a compound Poisson process. The paths of such a process are again stepped functions, with the abscissas of the jumps corresponding to the occurrence times of the underlying Poisson process and the heights of the jumps being the realised values of the r.v.s Y_k. In addition, we have:

E(Z_t) = αt · µ_Y
var(Z_t) = αt · (σ_Y² + µ_Y²)

2.3.2.2 Standard Brownian motion

Consider a sequence of r.v.s X_k, independent and identically distributed, with values (−ΔX) and ΔX, each with probability 1/2, and define the sequence of r.v.s Y_n through Y_n = X_1 + X_2 + · · · + X_n. This is known as a symmetrical random walk. As

E(X_k) = 0,   var(X_k) = (ΔX)²

we have

E(Y_n) = 0,   var(Y_n) = n(ΔX)²

For our modelling requirements, we divide the interval of time [0; t] into n subintervals of the same duration Δt = t/n and define Z_t = Z_t^(n) = Y_n. We have:

E(Z_t) = 0,   var(Z_t) = var(Y_n) = n(ΔX)² = ((ΔX)²/Δt) · t

This variable Z_t allows the discrete development of a magnitude to be modelled. If we then wish to move to continuous modelling while retaining the same variability per unit of time, that is, with (ΔX)²/Δt = 1, for example, we obtain the stochastic process w_t = lim_{n→∞} Z_t^(n). This is a standard Brownian motion (also known as a Wiener process).

It is clear that this stochastic process w_t, defined on R+, is such that w_0 = 0, that w_t has independent and stationary increments, and that, in view of the central limit theorem, w_t is distributed according to a normal law with parameters (0; √t). It can be shown that the paths of a Wiener process are continuous everywhere but generally cannot be differentiated. In fact,

Δw_t/Δt = ε√Δt/Δt = ε/√Δt

where ε is a standard normal r.v.
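The convergence of the scaled random walk towards a normal law with parameters (0; √t) can be checked by simulation; a minimal sketch (parameters are illustrative):

```python
import random

def brownian_endpoint(t, n, rng):
    # Symmetric random walk: n steps of size dx = sqrt(dt), dt = t/n,
    # so that the variability per unit of time (dx)^2 / dt equals 1
    dt = t / n
    dx = dt ** 0.5
    return sum(dx if rng.random() < 0.5 else -dx for _ in range(n))

rng = random.Random(0)
t, n_paths = 2.0, 5000
samples = [brownian_endpoint(t, 200, rng) for _ in range(n_paths)]
mean = sum(samples) / n_paths
var = sum((w - mean) ** 2 for w in samples) / n_paths
# mean close to 0 and var close to t = 2, in line with w_t ~ N(0; sqrt(t))
```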


2.3.2.3 Itô process

If a more developed model is required, w_t can be multiplied by a constant in order to produce a variability per time unit (ΔX)²/Δt different from 1, or a constant can be added to it in order to obtain a non-zero mean:

X_t = X_0 + b · w_t

This type of model is not greatly effective because of the great variability of the development in the short term, the standard deviation of X_t being equal⁷ to b√t. For this reason, this type of construction is applied instead to variations relating to a short interval of time:

dX_t = a · dt + b · dw_t

It is possible to generalise by replacing the constants a and b with functions of t and X_t:

dX_t = a_t(X_t) · dt + b_t(X_t) · dw_t

This type of process is known as an Itô process. In financial modelling, several specific cases of the Itô process are used. A geometric Brownian motion is obtained when:

a_t(X_t) = a · X_t,   b_t(X_t) = b · X_t

An Ornstein–Uhlenbeck process corresponds to:

a_t(X_t) = a · (c − X_t),   b_t(X_t) = b

and the square root process is such that:

a_t(X_t) = a · (c − X_t),   b_t(X_t) = b √X_t

2.3.3 Stochastic differential equations

Expressions of the type dX_t = a_t(X_t) · dt + b_t(X_t) · dw_t cannot simply be handled in the same way as the corresponding deterministic expressions, because w_t cannot be differentiated. It is, however, possible to extend the definition to a concept of stochastic differential, through the theory of stochastic integral calculus.⁸ For a stochastic process z_t defined on the interval [a; b], the stochastic integral of z_t over [a; b] with respect to the standard Brownian motion w_t is defined by:

∫_a^b z_t dw_t = lim_{n→∞, δ→0} Σ_{k=0}^{n−1} z_{t_k} (w_{t_{k+1}} − w_{t_k})

⁷ The root function presents a vertical tangent at the origin.
⁸ The full development of this theory is outside the scope of this work.

where:

a = t_0 < t_1 < . . . < t_n = b,   δ = max_{k=1,...,n} (t_k − t_{k−1})

Let us now consider a stochastic process Z_t (for which we wish to define the stochastic differential) and a standard Brownian motion w_t. If there is a stochastic process z_t such that Z_t = Z_0 + ∫_0^t z_s dw_s, then Z_t is said to admit the stochastic differential dZ_t = z_t dw_t. This differential is interpreted as follows: the stochastic differential dZ_t represents the variation (over a very short period of time dt) of Z_t, triggered by a random variation dw_t weighted by z_t, which represents the volatility of Z_t at the moment t.

More generally, the definition of dX_t = a_t(X_t) · dt + b_t(X_t) · dw_t is given by:

X_{t_2} − X_{t_1} = ∫_{t_1}^{t_2} a_t(X_t) dt + ∫_{t_1}^{t_2} b_t(X_t) dw_t
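In practice, an Itô process is often simulated by discretising this relation over small steps Δt (the Euler scheme, a standard numerical method not described in the text). A minimal sketch for the geometric Brownian and Ornstein–Uhlenbeck cases, with illustrative parameters:

```python
import random
from math import exp

def euler_endpoint(a_func, b_func, x0, t_max, n, rng):
    # Euler scheme for dX_t = a_t(X_t) dt + b_t(X_t) dw_t:
    # X is advanced by a(X)*dt plus b(X)*sqrt(dt)*eps, eps standard normal
    dt = t_max / n
    x = x0
    for _ in range(n):
        x += a_func(x) * dt + b_func(x) * dt ** 0.5 * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(1)
a, b, c, n_paths = 0.5, 0.2, 1.0, 4000

# Geometric Brownian motion: a_t(X) = a.X, b_t(X) = b.X; E(X_1) = X_0 e^a
gbm_mean = sum(euler_endpoint(lambda x: a * x, lambda x: b * x, 1.0, 1.0, 100, rng)
               for _ in range(n_paths)) / n_paths

# Ornstein-Uhlenbeck: a_t(X) = a(c - X), b_t(X) = b; from X_0 = 0,
# E(X_1) = c(1 - e^-a)
ou_mean = sum(euler_endpoint(lambda x: a * (c - x), lambda x: b, 0.0, 1.0, 100, rng)
              for _ in range(n_paths)) / n_paths
```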

The stochastic differential has some of the properties of ordinary differentials, such as linearity. Not all of them, however, remain true. For example,⁹ the stochastic differential of a product of two stochastic processes whose factors admit the known stochastic differentials

dX_t^(i) = a_t^(i) dt + b_t^(i) dw_t,   i = 1, 2

is given by:

d(X_t^(1) X_t^(2)) = X_t^(1) dX_t^(2) + X_t^(2) dX_t^(1) + b_t^(1) b_t^(2) dt

Another property, which corresponds to the derivation formula for a compound function, is the well-known Itô formula.¹⁰ This formula gives the differential of a function of two variables: a stochastic process for which the stochastic differential is known, and time. If the process X_t has the stochastic differential dX_t = a_t dt + b_t dw_t and if f(x, t) is a C²-class function, the process f(X_t, t) admits the following stochastic differential:

df(X_t, t) = [ f′_t(X_t, t) + f′_x(X_t, t) a_t + ½ f″_xx(X_t, t) b_t² ] · dt + f′_x(X_t, t) b_t · dw_t

⁹ We will from now on leave out the argument X_t in the expressions of the functions a and b.
¹⁰ Also known as Itô's lemma.

Appendix 3
Statistical Concepts¹

3.1 INFERENTIAL STATISTICS

3.1.1 Sampling

3.1.1.1 Principles

In inferential statistics, we are usually interested in a population and in the variables measured on the individual members of that population. Unfortunately, the population as a whole is often far too large, and sometimes not sufficiently well known, to be handled directly. In practice, therefore, we must confine ourselves to a subset of the population, known as a sample. Then, on the basis of the observations made on that sample, we attempt to deduce (infer) conclusions relating to the population.

The operation that consists of extracting the sample from the population is known as sampling. It is here that probability theory becomes involved, constituting the link between the population and the sample. Sampling is said to be simple random sampling when the individual members are extracted independently from the population and all have the same probability of being chosen. In practice, this is not necessarily the case, and the procedures set up for carrying out the sampling process must imitate chance as closely as possible.

3.1.1.2 Sampling distribution

Suppose that we are interested in a parameter θ of the population. If we extract a sample x_1, x_2, . . . , x_n from the population, we can calculate the value θ(x_1, x_2, . . . , x_n) of this parameter for the sample. As the sampling is at the origin of the fortuitous aspect of this procedure, another sample x_1′, x_2′, . . . , x_n′ would have given another parameter value θ(x_1′, x_2′, . . . , x_n′). We are therefore constructing a r.v. Θ whose various possible values are the results of the calculation of θ for all the possible samples. The law of probability of this r.v. is known as the sampling distribution.
In order to illustrate this concept, let us consider the sampling distribution for the mean of the population, and suppose that the variable considered has mean µ and variance σ². On the basis of the various samples, it is possible to calculate an average on each occasion:

x̄ = (1/n) Σ_{i=1}^n x_i,   x̄′ = (1/n) Σ_{i=1}^n x_i′,   · · ·

¹ Readers interested in finding out more about the concepts developed below should read: Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988; Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux, 1975; Johnston J., Econometric Methods, McGraw-Hill, 1972; Justens D., Statistique pour décideurs, De Boeck, 1988; Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.


We thus define a r.v. X̄ for which it can be demonstrated that:

E(X̄) = µ
var(X̄) = σ²/n

The first of these two relations justifies the choice of the average of the sample as an estimator for the mean of the population. It is referred to as an unbiased estimator.

Note
If we examine in a similar way the sampling distribution for the variance, calculated on the basis of a sample using s² = (1/n) Σ_{i=1}^n (x_i − x̄)², the associated r.v. S² will be such that:

E(S²) = ((n − 1)/n) σ²

We are no longer looking at an unbiased estimator, but at an asymptotically unbiased estimator (for n tending towards infinity). For this reason, the following expression is frequently chosen as an estimator for the variance: (1/(n − 1)) Σ_{i=1}^n (x_i − x̄)².

3.1.2 Two problems of inferential statistics

3.1.2.1 Estimation

If the problem is one of estimating a parameter θ of the population, we must construct an estimator Θ that is a function of the values observed through the sampling procedure. It is therefore important for this estimator to be of good quality for evaluating the parameter θ. We thus often require an unbiased estimator: E(Θ) = θ. Nevertheless, of all the unbiased estimators, we want the estimator adopted to have other properties, most notably that its dispersion around the central value θ be as small as possible: its variance var(Θ) = E((Θ − θ)²) must be minimal.²

Alongside this point estimation (there is only one estimate per sample), a precision is generally calculated for the estimation by determining an interval [Θ_1; Θ_2], centred on the estimated value, that contains the true value of the parameter θ with a given probability:

Pr[Θ_1 ≤ θ ≤ Θ_2] = 1 − α

with α = 0.05, for example. This interval is termed the confidence interval for θ, and the number (1 − α) is the confidence coefficient. This estimation by confidence interval is only possible if the sampling distribution for Θ is known, for example because the population obeys a known distribution or because certain asymptotic results, such as the central limit theorem, can be applied to it.
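The relations E(X̄) = µ and var(X̄) = σ²/n are easy to check by simulation; a minimal sketch with illustrative parameters:

```python
import random

rng = random.Random(7)
mu, sigma, n, n_samples = 10.0, 3.0, 25, 8000

# Draw many samples of size n and compute the average of each: the
# resulting r.v. X-bar has mean mu and variance sigma^2 / n
averages = [sum(rng.gauss(mu, sigma) for _ in range(n)) / n
            for _ in range(n_samples)]
m = sum(averages) / n_samples
v = sum((x - m) ** 2 for x in averages) / n_samples
# m close to 10 and v close to sigma^2 / n = 9/25 = 0.36
```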
Let us examine, by way of an example, the estimation of the mean of a normal population through a confidence interval. It is already known that the 'best' estimator is the average of the sample, which is distributed according to a normal law with parameters (µ; σ/√n).

² For example, the sample average is the unbiased estimator with minimal variance for the mean of the population.

361

X−µ √ is thus standard normal. If the quantile for this last distribution is termed σ/ n Q(u), we have: α X − µ α =1−α ≤ Pr Q √ ≤Q 1− 2 2 σ/ n α

α σ σ ≤µ≤X− √ Q =1−α Pr X − √ Q 1 − 2 2 n n

α α σ σ ≤µ≤X+ √ Q 1− =1−α Pr X − √ Q 1 − 2 2 n n the r.v.

This last equality makes up the conﬁdence interval formula for the mean; it can also be written more concisely as: α σ (s.p.α) I.C.(µ) : X ± √ Q 1 − 2 n We indicate that in this last formula, the standard deviation for the population σ is generally not known. If it is replaced by its estimator calculated on the basis of the sample, the quantile for the distribution must be replaced by the quantile relative to the Student distribution at (n − 1) degrees of freedom. 3.1.2.2 Hypothesis test The aim of a hypothesis test is to conﬁrm or refute a hypothesis formulated by a population, on the basis of a sample. In this way, we will know: • The goodness-of-ﬁt tests: verifying whether the population from which the sample is taken is distributed according to a given law of probability. • The independence tests between certain classiﬁcation criteria deﬁned on the population (these are also used for testing independence between r.v.s). • The compliance tests: verifying whether a population parameter is equal to a given value. • The homogeneity tests: verifying whether the values for a parameter measured on more than one population are the same (this requires one sample to be extracted per population). The procedure for carrying out a hypothesis test can be shown as follows. After deﬁning the hypothesis to be tested H0 , also known as the null hypothesis, and the alternative hypotheses H1 , we determine under H0 the sampling distribution for the parameter to be studied. With the ﬁxed conﬁdence coefﬁcient (1 − α), the sample is allocated to the region of acceptance (AH0 ) or to the region of rejection (RH0 ) within H0 . Four situations may therefore arise depending on the reality on one hand and the decision taken on the other hand (see Table A3.1). Zones (a) and (d) in Table A3.1 correspond to correct conclusions of the test. 
In zone (b) the hypothesis is rejected although it is true; this is a ﬁrst-type error for which the probability is the complementary α of the conﬁdence coefﬁcient ﬁxed beforehand. In zone

362

Asset and Risk Management Table A3.1

Hypothesis test conclusions Decision

reality

AH0

RH0

H0 H1

a c

b d

(c), the hypothesis is accepted although it is false; this is a second-type error for which the probability β is unknown. A good test will therefore have a small parameter β; the complementary (1 − β) of this probability is called the power of the test. By way of an example, we present the compliance test for the mean of a normal population. The hypothesis under test is, for example, H0 : µ = 1. The rival hypothesis is written as: H1 : µ = 1. X−1 Under H0 , the r.v. √ follows a normal law and the hypothesis being tested will σ/ n therefore be rejected when: X − 1 α √ >Q 1− 2 σ/ n

(s.p.α).

Again, the normal distribution quantile is replaced by the quantile for the Student distribution with (n − 1) degrees of freedom if the standard deviation for the population is replaced by the standard deviation for the sample.

3.2 REGRESSIONS 3.2.1 Simple regression Let us assume that a variable Y depends on another variable X through a linear relation Y = aX + b and that a series of observations is available for this pair of variables (X, Y ): (xt , yt ) t = 1, . . . , n. 3.2.1.1 Estimation of model If the observation pairs are represented on the (X, Y ) plane, it will be noticed that there are differences between them and a straight line (see Figure A3.1). These differences Y Y = aX + b yt

εt

axt + b

xt

Figure A3.1 Simple regression

X

Statistical Concepts

363

may arise, especially in the ﬁeld of economics, through failure to take account of certain explanatory factors of variable Y . It is therefore necessary to ﬁnd the straight line that passes as closely as possible to the point cloud, that is, the straight line for which εt = yt − (axt + b) are as small as possible overall. The criterion most frequently used is that of minimising the sum of the squares of these differences (referred to as the least square method ). The problem is therefore one of searching for the parameters a and b for which the expression n t=1

εt2 =

n

2 yt − (axt + b) t=1

is minimal. It can be easily shown that these parameters total: n

sxy aˆ = 2 = sx

t=1

(xt − x)(yt − y) n

(xt − x)2

t=1

bˆ = y − ax ˆ These are unbiased estimators of the real unknown parameters a and b. In addition, of all the unbiased estimators expressed linearly as a function of yt , they are the ones with the smallest variance.3 The straight line obtained using the procedure is known as the regression line. 3.2.1.2 Validation of model The signiﬁcantly explanatory character of the variable X in this model can be proved by testing the hypothesis H0 : a = 0. If we are led to reject the hypothesis, it is because X signiﬁcantly explains Y through the model, that is therefore validated. Because under certain probability hypotheses on the residuals εt the estimator for a is distributed according to a Student law with (n − 2) degrees of freedom, the hypothesis will be rejected (and the model therefore accepted) if aˆ (n−2) > t1−α/2 (s.p.α) sa where sa is the standard deviation for the estimator for a, measured on the observations. 3.2.2 Multiple regression The regression model that we have just presented can be generalised when several explanatory variables are involved at once: Y = α0 + α1 X1 + · · · + αk Xk . 3

They are referred to as BLUE (Best Linear Unbiased Estimators).

364

Asset and Risk Management

In this case, if the observations x and y and the parameters α are presented as matrices y1 α1 1 x11 · · · x1k y2 α2 1 x21 · · · x2k X=. .. .. Y = .. α = .. , .. . .. . . . . yn αn 1 xn1 · · · xnk it can be shown that the vector for the parameter estimations is given by αˆ = (Xt X)−1 (Xt Y ). In addition, the Student validation test shown for the simple regression also applies here. It is used to test the signiﬁcantly explanatory nature of a variable within the multiple model, the only alteration being the number of degrees of freedom, which passes from (n − 2) to (n − k − 1). We should mention that there are other tests for the overall validity of the multiple regression model. 3.2.3 Nonlinear regression It therefore turns out that the relation allowing Y to be explained by X1 , X2 , . . . , Xk is not linear: Y = f (X1 , X2 , . . . , Xk ). In this case, sometimes, the relation can be made linear by a simple analytical conversion. For example, Y = aXb is converted by a logarithmic transformation: ln Y = ln a + b ln X Y ∗ = a ∗ + bX∗ We are thus brought round to a linear regression model. Other models cannot be transformed quite so simply. Thus, Y = a + Xb is not equivalent to the linear model. In this case, much better developed techniques, generally of an iterative nature, must be used to estimate the parameters for this type of model.

Appendix 4 Extreme Value Theory 4.1 EXACT RESULT Let us consider a sequence of r.v.s X1 , X2 , . . . , Xn , independent and identically distributed with a common distribution function FX . Let us also consider the sequence of r.v.s Z1 , Z2 , . . . , Zn , deﬁned by: Zk = max(X1 , . . . , Xk ).

k = 1, . . . , n

The d.f. for Zn is given by: F (n) (z) = Pr[max(X1 , . . . , Xn ) ≤ z] = Pr([X1 ≤ z] ∩ · · · ∩ [Xn ≤ z]) = Pr[X1 ≤ z] · · · · ·Pr[Xn ≤ z] = FXn (z) Note When one wishes to study the distribution of an extreme Zn for a large number n of r.v.s, the precise formula established by us is not greatly useful. In fact, we need to have a result that does not depend essentially on the d.f., as Fx is not necessarily known with any great accuracy. In addition, when n tends towards the inﬁnite, the r.v. Zn tends towards a degenerate r.v., as: 0 si FX (z) < 1 lim F (n) (z) = 1 si FX (z) = 1 n→∞ It was for this reason that asymptotic extreme value theory was developed.

4.2 ASYMPTOTIC RESULTS Asymptotic extreme value theory originates in the work of R. A. Fisher,1 and the problem was fully solved by B. Gnedenko.2 4.2.1 Extreme value theorem The extreme value theorem states that under the hypothesis of independence and equal distribution of r.v.s X1 , X2 , . . . , Xn , if there are also two sequences of coefﬁcients αn > 0 1 Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1978, pp. 180–90. 2 Gnedenko B. V., On the distribution limit of the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.

366

Asset and Risk Management

and βn (n = 1, 2, . . .) so that the limit (for n → ∞) of the random variable Yn =

max(X1 , . . . , Xn ) − βn αn

is not degenerate, it will admit a law of probability deﬁned by a distribution function that must be one of the following three forms: (z) = exp[−e−z ] 0 (z) = exp[−z−k ] exp[−(−z)k ] (z) = 1

z≤0 z>0 z 0 x→∞ 1 − FX (ux) 3

That is, the value that corresponds to the maximum of the probability density.

Extreme Value Theory

367

The laws covered by this description are the laws for which the tails decrease less rapidly than the exponential, such as Student’s law, Cauchy’s law and stable Pareto’s law. Finally, the attraction domain of Weibull’s law is characterised by the presence of a number x0 for which FX (x0 ) = 1 and FX (x) < 1 when x < x0 , and the presence of a positive parameter k, so that 1 − FX (x0 + ux) = uk x→0− 1 − FX (x0 + x) lim

∀u > 0

This category contains the bounded support distributions, such as the uniform law. 4.2.3 Generalisation A. F. Jenkinson has been able to provide Gnedenko’s result with a uniﬁed form. In fact, if for Fr´echet’s law it is suggested that z = 1 − τy and k = −1/τ , we will ﬁnd, when τ < 0 and we obtain exp[−z−k ] = exp[−(1 − τy)1/τ ] a valid relation for z > 0, that is, y > 1/τ (for the other values of y, the r.v. takes the value 0). In the same way, for Weibull’s law , it is suggested that z = τy − 1 and k = 1/τ . We then ﬁnd, when τ > 0 and we obtain exp[−(−z)k ] = exp[−(1 − τy)1/τ ] a valid relation for z < 0, that is, y < 1/τ (for the other values of y, the r.v. takes the value 1). We therefore have the same analytical expression in both cases. We will also see that the same applies to Gumbel’s law . By passage to the limit, we can easily ﬁnd: y n lim exp[−(1 − τy)1/τ ] = exp − lim 1 − = exp[−e−y ] n→±∞ τ →0± n which is the expression set out in Gumbel’s law. To sum up: by paring a(y) = exp[−(1 − τy)1/τ ], the d.f. FY of the extreme limit distribution is written as follows: 0 si y ≤ 1/τ (Fr´echet’s law). If t < 0, FY (y) = a(y) si y > 1/τ If t = 0, FY (y) = a(y) a(y) If t > 0, FY (y) = 1

∀y (Gumbel’s Law). si y < 1/τ si y ≥ 1/τ

This, of course, is the result shown in Section 7.4.2.

(Weibull’s law).
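Jenkinson's unified form can be coded directly; the sketch below (function name ours) also checks the passage to the Gumbel limit as τ → 0:

```python
from math import exp

def gev_df(y, tau):
    # Unified (Jenkinson) form: F_Y(y) = exp[-(1 - tau*y)^(1/tau)] for tau != 0
    if tau < 0:                       # Frechet case
        if y <= 1 / tau:
            return 0.0
        return exp(-(1 - tau * y) ** (1 / tau))
    if tau > 0:                       # Weibull case
        if y >= 1 / tau:
            return 1.0
        return exp(-(1 - tau * y) ** (1 / tau))
    return exp(-exp(-y))              # Gumbel limit tau -> 0

# For small |tau| the unified form approaches the Gumbel d.f.
y = 1.3
gap = abs(gev_df(y, 1e-6) - gev_df(y, 0.0))
```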

Appendix 5
Canonical Correlations

5.1 GEOMETRIC PRESENTATION OF THE METHOD

The aim of canonical analysis¹ is to study the linear relations that exist between the static spreads and the dynamic spreads observed on the same sample. We look for a linear combination of static spreads and a linear combination of dynamic spreads that are as well correlated as possible.

We therefore have two sets of characters: x_1, x_2, . . . , x_p on one hand and y_1, y_2, . . . , y_q on the other. In addition, it is assumed that the characters are centred, standardised and observed on the same number n of individuals. The two sets of characters generate the respective associated vectorial subspaces V_1 and V_2 of R^n. We also introduce the matrices X and Y, with respective formats (n, p) and (n, q), in which the various columns are the observations relative to the different characters. As the characters are centred, the same applies to the vectors of these subspaces.

Geometrically, therefore, the problem of canonical analysis can be presented as follows: we need to find ξ ∈ V_1 and η ∈ V_2 such that cos²(ξ, η) = r²(ξ, η) is maximised.

5.2 SEARCH FOR CANONICAL CHARACTERS

Let us assume that the characters ξ¹ and η¹ are solutions to the problem (see Figure A5.1). The angle between ξ¹ and η¹ does not depend on their norm (length): V_1 and V_2 are invariant when the base vectors are multiplied by a scalar, and therefore cos²(ξ¹, η¹) does not depend on the base vector norms. It is then assumed that ||ξ¹|| = ||η¹|| = 1.

The character η¹ must be collinear with the orthogonal projection of ξ¹ onto V_2, which is the vector of V_2 that makes a minimal angle with ξ¹. This condition is written A_2ξ¹ = r_1η¹, where r_1² = cos²(ξ¹, η¹) and A_2 is the operator of orthogonal projection onto V_2. In the same way, we have A_1η¹ = r_1ξ¹. These two relations produce the system

A_1A_2ξ¹ = λ_1ξ¹
A_2A_1η¹ = λ_1η¹

where λ_1 = r_1² = cos²(ξ¹, η¹). It is therefore deduced that ξ¹ and η¹ are respectively the eigenvectors of the operators A_1A_2 and A_2A_1 associated with the same largest eigenvalue λ_1, this value being equal

¹ A detailed description of this method, and of other multivariate statistical methods, is found in: Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.

Figure A5.1 Canonical correlations (ξ¹ ∈ V_1 with its orthogonal projection A_2ξ¹ onto V_2, and η¹ ∈ V_2 with its orthogonal projection A_1η¹ onto V_1)

to their squared cosine, or their squared correlation. The characters ξ¹ and η¹ are deduced from each other by a simple linear application:

η¹ = (1/√λ_1) A_2ξ¹   and   ξ¹ = (1/√λ_1) A_1η¹

The following canonical characters are the eigenvectors of A_1A_2 associated with the eigenvalues λ_i sorted in decreasing order. If the canonical characters of order i are written ξ^i = a_1x_1 + · · · + a_px_p and η^i = b_1y_1 + · · · + b_qy_q (in other words, in matrix terms, ξ^i = Xa and η^i = Yb), and if the diagonal matrix of the weights is expressed as D, it can be shown that:

b = (1/√λ_i) (Y^t D Y)^{−1} (X^t D Y)^t a
a = (1/√λ_i) (X^t D X)^{−1} (X^t D Y) b

Appendix 6
Algebraic Presentation of Logistic Regression

Let Y be the binary qualitative variable (0 for periods of equilibrium, 1 for breaks in equilibrium) that we wish to explain by the quantitative explanatory variables X_1, . . . , X_p. The model looks to evaluate the following probabilities:

p_i = Pr[Y = 1 | X_1 = x_i1; . . . ; X_p = x_ip]

The logistic regression model¹ is a nonlinear regression model. Here, the specification of the model is based on the use of the logistic function:

G(p) = ln[p / (1 − p)]

In this type of model, it is considered that there is a linear dependency between G(p_i) and the explanatory variables:

G(p_i) = β_0 + β_1 x_i1 + · · · + β_p x_ip

where β_0, β_1, . . . , β_p are the unknown parameters to be estimated. By introducing the vector β of these coefficients and the vector z_i = (1, x_i1, . . . , x_ip)^t, the binomial probability can be expressed in the form:

p_i = e^(βz_i) / (1 + e^(βz_i))

The method for estimating the parameters is that of maximising the likelihood function through successive iterations. This likelihood function is the product of the statistical densities relative to each individual member:

L(β) = Π_{i:y_i=1} [e^(βz_i) / (1 + e^(βz_i))] · Π_{i:y_i=0} [1 / (1 + e^(βz_i))]
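The maximisation of L(β) (equivalently, of ln L(β)) can be sketched with a plain gradient ascent; this is a simple stand-in for the iterative procedure mentioned in the text, and the data below are purely illustrative:

```python
from math import exp, log

def log_likelihood(beta, zs, ys):
    # ln L(beta) = sum_i [ y_i * (beta.z_i) - ln(1 + exp(beta.z_i)) ]
    total = 0.0
    for z, y in zip(zs, ys):
        s = sum(b * v for b, v in zip(beta, z))
        total += y * s - log(1 + exp(s))
    return total

def fit_logistic(zs, ys, lr=0.1, iters=2000):
    # Gradient ascent on ln L; the gradient is sum_i (y_i - p_i) z_i
    beta = [0.0] * len(zs[0])
    for _ in range(iters):
        grad = [0.0] * len(beta)
        for z, y in zip(zs, ys):
            s = sum(b * v for b, v in zip(beta, z))
            p = 1 / (1 + exp(-s))
            for j, v in enumerate(z):
                grad[j] += (y - p) * v
        beta = [b + lr * g / len(zs) for b, g in zip(beta, grad)]
    return beta

# z_i = (1, x_i): an intercept plus one explanatory variable
zs = [(1.0, x) for x in (-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)]
ys = [0, 0, 1, 0, 1, 1]
beta = fit_logistic(zs, ys)
p_pos = 1 / (1 + exp(-sum(b * v for b, v in zip(beta, (1.0, 2.0)))))
```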

¹ A detailed description of this method, and of other multivariate statistical methods, is found in: Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.

Appendix 7
Time Series Models: ARCH-GARCH and EGARCH

7.1 ARCH-GARCH MODELS

The ARCH-GARCH (auto-regressive conditional heteroscedasticity and generalised auto-regressive conditional heteroscedasticity) models were developed by Engle¹ in 1982 in the context of studies of macroeconomic data. The ARCH model allows the variance of the error term to be modelled specifically. Heteroscedasticity can be integrated by introducing an exogenous variable x that drives the variance of the error term. This modelling can take one of the following forms:

y_t = e_t · x_{t−1}   or   y_t = e_t · y_{t−1}

Here, e_t is a white noise (a sequence of uncorrelated r.v.s with zero mean and common variance). In order to prevent the variance of this geometric series from being infinite or zero, it is preferable to take the following formulation:

y_t = a_0 + Σ_{i=1}^p a_i y_{t−i} + ε_t

with:

E(ε_t) = 0
var(ε_t) = γ + Σ_{i=1}^q α_i ε_{t−i}²

This type of model is generally expressed as AR(p) − ARCH(q) or ARCH(p, q).
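An AR(1)-ARCH(1) specification of this type can be simulated as follows (parameter values are illustrative):

```python
import random

def simulate_ar_arch(n, a0, a1, gamma, alpha1, seed=11):
    # AR(1)-ARCH(1): y_t = a0 + a1 * y_{t-1} + eps_t,
    # with conditional variance var(eps_t) = gamma + alpha1 * eps_{t-1}^2
    rng = random.Random(seed)
    ys, eps_prev, y = [], 0.0, 0.0
    for _ in range(n):
        var_t = gamma + alpha1 * eps_prev ** 2
        eps = rng.gauss(0.0, var_t ** 0.5)
        y = a0 + a1 * y + eps
        ys.append(y)
        eps_prev = eps
    return ys

ys = simulate_ar_arch(20000, a0=0.1, a1=0.5, gamma=0.2, alpha1=0.3)
m = sum(ys) / len(ys)
# Unconditional mean of the AR part: a0 / (1 - a1) = 0.2
```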

7.2 EGARCH MODELS

These models, unlike the ARCH-GARCH models, allow the conditional variance to respond differently to a fall or a rise in the series. This configuration is of particular interest for financial series, which generally increase. An example of this type of model is Nelson's:²

x_t = µ + √h_t χ_t
ln h_t = α + β ln h_{t−1} + δ(|χ_{t−1}| − √(2/π)) + γ χ_{t−1}

Here, χ_t | I_{t−1} follows a standard normal law (I_{t−1} representing the information available at the moment t − 1).

¹ Engle R. F., Auto-regressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, No. 50, 1982, pp. 987–1007. A detailed presentation of time series models will also be found in: Droesbeke J. J., Fichet B. and Tassi P., Modélisation ARCH, théorie statistique et applications dans le domaine de la finance, Éditions ULB, 1994; and in Gouriéroux C., Modèles ARCH et applications financières, Economica, 1992.
² Nelson D. B., Conditional heteroskedasticity in asset returns: a new approach, Econometrica, No. 59, 1991, pp. 347–70.

Appendix 8 Numerical Methods for Solving Nonlinear Equations¹

An equation is said to be nonlinear when it involves terms of degree higher than 1 in the unknown quantity. These terms may be polynomial or capable of being expanded in Taylor series of degree higher than 1. Nonlinear equations cannot in general be solved analytically; their solutions must therefore be approached using iterative methods. The principle of these methods consists of starting from an arbitrary point – the closest possible point to the solution sought – and arriving at the solution gradually through successive trials. The two criteria to take into account when choosing a method for solving nonlinear equations are:

• the convergence of the method (conditions of convergence, speed of convergence etc.);
• the calculation cost of the method.

8.1 GENERAL PRINCIPLES FOR ITERATIVE METHODS

8.1.1 Convergence

Any nonlinear equation f(x) = 0 can be expressed as x = g(x). If x₀ constitutes the arbitrary starting point for the method, the solution x* of this equation, x* = g(x*), can be reached by the numerical sequence:

$$x_{n+1} = g(x_n) \qquad n = 0, 1, 2, \ldots$$

This iteration is termed a Picard process, and x*, the limit of the sequence, is termed the fixed point of the iteration. For the sequence to tend towards the solution of the equation, its convergence must be guaranteed. A sufficient condition for convergence is supplied by the following theorem: if x = g(x) has a solution a within the interval I = [a − b; a + b] = {x : |x − a| ≤ b} and if g(x) satisfies the Lipschitz condition

$$\exists L \in [0;1[ \; : \; \forall x \in I, \quad |g(x) - g(a)| \le L\,|x - a|$$

then, for every x₀ ∈ I:

• all the iterated values xₙ will belong to I;
• the iterated values xₙ will converge towards a;
• the solution a will be unique within the interval I.

¹ This appendix is mostly based on Litt F. X., Analyse numérique, première partie, ULG, 1999. Interested readers should also read: Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981; and Nougier J. P., Méthodes de calcul numérique, Masson, 1993.
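The Picard process lends itself to a very short implementation. A sketch follows, using x = cos x as an illustrative equation (the tolerance and iteration cap are assumptions):

```python
import math

def picard(g, x0, tol=1e-12, max_iter=200):
    """Picard process: iterate x_{n+1} = g(x_n) until successive
    iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("Picard iteration did not converge")

# x = cos(x): |g'(x)| = |sin(x)| < 1 near the root, so the
# Lipschitz condition of the theorem above is satisfied.
root = picard(math.cos, 1.0)
```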


We should also show a case in which the Lipschitz condition is satisfied: it is sufficient that, for every x ∈ I, g′(x) exists and is such that |g′(x)| ≤ m with m < 1.

8.1.2 Order of convergence

It is important to choose the most suitable of the methods that converge. At this level, one of the most important criteria to take into account is the speed, or order, of convergence. Consider the sequence xₙ defined above and the error eₙ = xₙ − a. If there is a number p and a constant C > 0 such that

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = C$$

then p is termed the order of convergence of the sequence and C the asymptotic error constant. When the speed of convergence is unsatisfactory, it can be improved by the Aitken extrapolation,² which is a convergence-acceleration process. The speed of convergence of this extrapolation is governed by the following result:

• If Picard's iterative method is of order p, the Aitken extrapolation will be of order 2p − 1.
• If Picard's iterative method is of the first order, the Aitken extrapolation will be of the second order in the case of a simple solution and of the first order in the case of a multiple solution. In this last case, the asymptotic error constant is equal to 1 − 1/m, where m is the multiplicity of the solution.

8.1.3 Stop criteria

As stated above, the iterative methods for solving nonlinear equations supply an approximation to the solution of the equation. It is therefore essential to be able to estimate the error in the solution. Working from the mean value theorem, f(xₙ) = (xₙ − a) f′(ξ) with ξ ∈ [xₙ; a], we can deduce the following estimate of the error:

$$|x_n - a| \le \frac{|f(x_n)|}{M}, \qquad |f'(x)| \ge M, \quad x \in [x_n; a]$$

In addition, the rounding error inherent in every numerical method limits the accuracy of the iterative methods to:

$$\varepsilon_a = \frac{\delta}{|f'(a)|}$$

² We refer to Litt F. X., Analyse numérique, première partie, ULG, 1999, for further details.


in which δ represents an upper bound for the rounding error in iteration n:

$$\delta \ge |\delta_n| = |f^*(x_n) - f(x_n)|$$

where f*(xₙ) represents the calculated value of the function. Let us now assume that we wish to determine a solution a with a degree of precision ε. We could stop the iterative process on the basis of the error-estimation formulae. These formulae, however, require a certain level of information on the derivative f′(x), information that is not easy to obtain. On the other hand, the limit precision ε_a will not generally be known beforehand.³ Consequently, we run the risk of ε, the accuracy level sought, never being reached, as it is better than the limit precision ε_a (ε < ε_a). In this case, the iterative process would carry on indefinitely. This leads us to accept the following stop criterion:

$$|x_n - x_{n-1}| < \varepsilon \qquad \text{or} \qquad |x_{n+1} - x_n| \ge |x_n - x_{n-1}|$$

This means that the iteration process will be stopped when iteration n + 1 produces a variation in value no smaller than that of iteration n. The value of ε will be chosen in a way that prevents the iteration from stopping too soon.
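Returning to the Aitken extrapolation of Section 8.1.2, it combines two Picard steps through the Δ² formula x̂ = x − (g(x) − x)² / (g(g(x)) − 2g(x) + x). A sketch, again using the fixed point of cos x as an illustrative example:

```python
import math

def aitken(g, x0, tol=1e-12, max_iter=100):
    """Accelerate the Picard process with Aitken's delta-squared formula:
    x_hat = x - (g(x) - x)^2 / (g(g(x)) - 2*g(x) + x)."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:          # already (numerically) at the fixed point
            return x2
        x_hat = x - (x1 - x) ** 2 / denom
        if abs(x_hat - x) < tol:
            return x_hat
        x = x_hat
    raise RuntimeError("Aitken iteration did not converge")

root = aitken(math.cos, 1.0)
```

For this first-order Picard process with a simple solution, the accelerated sequence converges at second order, reaching full precision in a handful of steps where plain iteration needs dozens.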

8.2 PRINCIPAL METHODS

Defining an iterative method ultimately comes down to defining the function h(x) in the equation

$$x = g(x) \equiv x - h(x) f(x)$$

The choice of this function will determine the order of the method.

8.2.1 First-order methods

The simplest choice consists of taking h(x) = m = constant ≠ 0.

8.2.1.1 Chord method

This defines the chord method (Figure A8.1), for which the iteration is

$$x_{n+1} = x_n - m f(x_n)$$

Figure A8.1 Chord method

³ This will in effect require knowledge of f′(a), when a is exactly what is being sought.
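The chord iteration can be sketched as follows; f(x) = x² − 2 and the value of m are illustrative choices, with m picked so that the sufficient condition given below (0 < m f′(x) < 2 near the root) holds:

```python
def chord(f, x0, m, tol=1e-12, max_iter=500):
    """Chord method: x_{n+1} = x_n - m*f(x_n) with a fixed constant m."""
    x = x0
    for _ in range(max_iter):
        x_next = x - m * f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("chord method did not converge")

# f(x) = x^2 - 2: near the root sqrt(2), f'(x) is about 2.83,
# so m = 0.3 keeps 0 < m*f'(x) < 2 in its neighbourhood.
r = chord(lambda x: x * x - 2.0, 1.0, 0.3)
```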

Figure A8.2 Classic chord method

The sufficient convergence condition (see Section A8.1.1) for this method is 0 < m f′(x) < 2 in the neighbourhood of the solution. In addition, it can be shown that

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|} = |g'(a)| \ne 0$$

The chord method is therefore clearly a first-order method (see Section A8.1.2).

8.2.1.2 Classic chord method

It is possible to improve the order of convergence by making m change at each iteration:

$$x_{n+1} = x_n - m_n f(x_n)$$

The classic chord method (Figure A8.2) takes as the value for mₙ the inverse of the slope of the straight line defined by the points (xₙ₋₁; f(xₙ₋₁)) and (xₙ; f(xₙ)):

$$x_{n+1} = x_n - \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}\, f(x_n)$$

This method will converge if f′(a) ≠ 0 and f″(x) is continuous in the neighbourhood of a. In addition, it can be shown that

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = \left| \frac{f''(a)}{2 f'(a)} \right|^{1/p} \ne 0$$

for p = ½(1 + √5) = 1.618… > 1, which greatly improves the order of convergence of the method.
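A sketch of the classic chord (secant) iteration; the example equation and tolerance are illustrative:

```python
def classic_chord(f, x0, x1, tol=1e-12, max_iter=100):
    """Classic chord (secant) method: m_n is the inverse slope of the
    line through (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("zero slope in secant step")
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("classic chord method did not converge")

r = classic_chord(lambda x: x * x - 2.0, 1.0, 2.0)
```

Note that the derivative is never evaluated and each step costs only one new evaluation of f; this is the basis of the effort comparison with the Newton–Raphson method in Section 8.2.2.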

8.2.1.3 Regula falsi method

The regula falsi method (Figure A8.3) takes as the value for mₙ the inverse of the slope of the straight line defined by the points (xₙ; f(xₙ)) and (xₙ′; f(xₙ′)), where n′ is the highest index for which f(xₙ′) · f(xₙ) < 0:

$$x_{n+1} = x_n - \frac{x_n - x_{n'}}{f(x_n) - f(x_{n'})}\, f(x_n)$$
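A sketch of the regula falsi iteration, which performs a secant step while always keeping a sign change bracketed (the example equation and stopping rule are illustrative):

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: secant step on the bracketing points, replacing
    whichever endpoint has the same sign as f at the new point."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

r = regula_falsi(lambda x: x * x - 2.0, 0.0, 2.0)
```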


Figure A8.3 Regula falsi method

This method always converges when f(x) is continuous. On the other hand, its convergence is linear and therefore less effective than the convergence of the classic chord method.

8.2.2 Newton–Raphson method

If, in the classic chord method, we choose mₙ so that g′(xₙ) = 0, that is, f′(xₙ) = 1/mₙ, we will obtain a second-order iteration. The method thus defined,

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

is known as the Newton–Raphson method (Figure A8.4). It is clearly a second-order method, as

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^2} = \frac{1}{2} \left| \frac{f''(a)}{f'(a)} \right| \ne 0$$

The Newton–Raphson method is therefore rapid insofar as the initial iterated value is not too far from the solution sought, as global convergence is not assured at all. A convergence criterion is therefore given by the following theorem. Assume that f′(x) ≠ 0, that f″(x) does not change its sign within the interval [a; b], and that f(a) · f(b) < 0. If, furthermore,

$$\left| \frac{f(a)}{f'(a)} \right| < b - a \qquad \text{and} \qquad \left| \frac{f(b)}{f'(b)} \right| < b - a$$

the Newton–Raphson method will converge from every initial arbitrary point x₀ belonging to [a; b].

Figure A8.4 Newton–Raphson method
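The Newton–Raphson iteration in code; f and its derivative are supplied explicitly, and the example equation is illustrative:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("Newton-Raphson did not converge")

# Second-order convergence: the number of correct digits roughly
# doubles at each step once the iterate is close to the root.
r = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```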


The classic chord method, unlike the Newton–Raphson method, requires two initial approximations, but only involves one new function evaluation at each subsequent stage. The choice between the classic chord method and the Newton–Raphson method will therefore depend on the effort of calculation required to evaluate f′(x). Let us assume that the effort of calculation required to evaluate f′(x) is θ times the effort of calculation required for f(x). Given what has been said above, we can establish that the effort of calculation will be the same for the two methods if

$$\frac{1 + \theta}{\log 2} = \frac{1}{\log p} \qquad \text{in which } p = \frac{1 + \sqrt{5}}{2}$$

is the order of convergence of the classic chord method. In consequence:

• If θ > (log 2 / log p) − 1 ≈ 0.44, the classic chord method will be used.
• If θ ≤ (log 2 / log p) − 1 ≈ 0.44, the Newton–Raphson method will be used.

8.2.3 Bisection method

The bisection method is a linear convergence method and is therefore slow. Use of the method is, however, justified by the fact that it converges globally, unlike the usual methods (especially the Newton–Raphson and classic chord methods). This method will therefore be used to bring the initial iterated value of the Newton–Raphson or classic chord method to a point sufficiently close to the solution to ensure that the methods in question converge. Let us assume therefore that f(x) is continuous in the interval [a₀; b₀] and such that⁴ f(a₀) · f(b₀) < 0. The principle of the method consists of putting together a convergent sequence of nested intervals, [a₁; b₁] ⊃ [a₂; b₂] ⊃ [a₃; b₃] ⊃ …, all of which contain a solution of the equation f(x) = 0. If it is assumed that⁵ f(a₀) < 0 and f(b₀) > 0, the intervals I_k = [a_k; b_k] will be constructed by recurrence on the basis of I_{k−1}:

$$[a_k; b_k] = \begin{cases} [m_k; b_{k-1}] & \text{if } f(m_k) < 0 \\ [a_{k-1}; m_k] & \text{if } f(m_k) > 0 \end{cases}$$

Here, m_k = (a_{k−1} + b_{k−1})/2. One is thus assured that f(a_k) < 0 and f(b_k) > 0, which guarantees convergence.
The bisection method is not a Picard iteration, but its order of convergence can be determined, as

$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|} = \frac{1}{2}$$

The bisection method is therefore a first-order method.
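A sketch of the bisection recurrence; it tracks signs explicitly rather than assuming f(a₀) < 0, and the tolerance is an illustrative choice:

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection: halve a sign-change interval until it is shorter than tol."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)           # m_k = (a_{k-1} + b_{k-1}) / 2
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:             # root lies in [a, m]
            b, fb = m, fm
        else:                       # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

r = bisection(lambda x: x * x - 2.0, 0.0, 2.0)
```

In practice, as the text suggests, the result would be used to seed the Newton–Raphson or classic chord iteration once the interval is small enough for those methods to converge.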

8.3 NONLINEAR EQUATION SYSTEMS

We have a system of n nonlinear equations in n unknowns:

$$f_i(x_1, x_2, \ldots, x_n) = 0 \qquad i = 1, 2, \ldots, n$$

or, in vectorial notation, f(x) = 0. The solution of the system is an n-dimensional vector a.

⁴ This implies that f(x) has a root within this interval.
⁵ This is not restrictive in any way, as it corresponds to f(x) = 0 or −f(x) = 0, x ∈ [a₀; b₀], depending on the case.


8.3.1 General theory of n-dimensional iteration

The general theory of n-dimensional iteration is similar to the one-dimensional theory. The above equation can thus be expressed in the form:

$$x = g(x) \equiv x - A(x) f(x)$$

where A is a square matrix of order n. Picard's iteration is always defined as

$$x_{k+1} = g(x_k) \qquad k = 0, 1, 2, \ldots$$

and the convergence theorem for Picard's iteration remains valid in n dimensions. In addition, if the Jacobian matrix J(x), defined by

$$[J(x)]_{ij} = \frac{\partial g_j(x)}{\partial x_i}$$

is such that, for every x ∈ I, ‖J(x)‖ ≤ m for a compatible norm with m < 1, the Lipschitz condition is satisfied. The order of convergence is defined by

$$\lim_{k \to \infty} \frac{\|e_{k+1}\|}{\|e_k\|^p} = C$$

where C is the asymptotic error constant.

8.3.2 Principal methods

If one chooses a constant matrix A as the value for A(x), the iterative process is the generalisation in n dimensions of the chord method. If the inverse of the Jacobian matrix of f is chosen as the value of A(x), we obtain the generalisation in n dimensions of the Newton–Raphson method. Another approach to solving the equation f(x) = 0 involves using the i-th equation to determine the i-th component. Therefore, for i = 1, 2, …, n, the following equations will be solved in succession with respect to xᵢ:

$$f_i\left(x_1^{(k+1)}, \ldots, x_{i-1}^{(k+1)}, x_i, x_{i+1}^{(k)}, \ldots, x_n^{(k)}\right) = 0$$

This is known as the nonlinear Gauss–Seidel method.
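A two-dimensional sketch of the n-dimensional Newton–Raphson method, solving the linear step J·dx = f by Cramer's rule in the 2 × 2 case; the system shown is an illustrative example, not one from the text:

```python
def newton_2d(f, jac, x, tol=1e-12, max_iter=50):
    """Newton-Raphson in 2 dimensions: x_{k+1} = x_k - J(x_k)^{-1} f(x_k)."""
    for _ in range(max_iter):
        f1, f2 = f(x)
        (a, b), (c, d) = jac(x)
        det = a * d - b * c
        # Solve J * dx = f by Cramer's rule (2x2 case).
        dx1 = (f1 * d - b * f2) / det
        dx2 = (a * f2 - f1 * c) / det
        x = (x[0] - dx1, x[1] - dx2)
        if abs(dx1) + abs(dx2) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Illustrative system: x^2 + y^2 = 1 and x - y = 0,
# whose positive solution is (1/sqrt(2), 1/sqrt(2)).
f = lambda v: (v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1])
jac = lambda v: ((2.0 * v[0], 2.0 * v[1]), (1.0, -1.0))
sol = newton_2d(f, jac, (1.0, 0.5))
```

For larger systems the explicit inverse would be replaced by a linear solve at each step; the nonlinear Gauss–Seidel variant instead cycles through the equations one component at a time.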

Bibliography CHAPTER 1 The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices for the Management and Supervision of Operational Risk, Basle, February 2003. The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle Capital Accord, Basle, January 2001. The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle Capital Accord: An Explanatory Note, Basle, January 2001. The Bank for International Settlements, Basle Committee for Banking Controls, Vue d’ensemble du Nouvel accord de Bale ˆ sur les fonds propres, Basle, January 2001. Cruz M. G., Modelling, Measuring and Hedging Operational Risk, John Wiley & Sons, Ltd, 2002. Hoffman D. G., Managing Operational Risk: 20 Firm-Wide Best Practice Strategies, John Wiley & Sons, Inc, 2002. Jorion P., Financial Risk Manager Handbook (Second Edition), John Wiley & Sons, Inc, 2003. Marshall C., Measuring and Managing Operational Risks in Financial Institutions, John Wiley & Sons, Inc, 2001.

CHAPTER 2 The Bank for International Settlements, BIS Quarterly Review, Collateral in Wholesale Financial Markets, Basle, September 2001. The Bank for International Settlements, Basle Committee for Banking Controls, Internal Audit in Banks and the Supervisor’s Relationship with Auditors, Basle, August 2001. The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices for Managing Liquidity in Banking Organisations, Basle, February 2000. The Bank for International Settlements, Committee on the Global Financial System, Collateral in Wholesale Financial Markets: Recent Trends, Risk Management and Market Dynamics, Basle, March 2001. Moody’s, Moody’s Analytical Framework for Operational Risk Management of Banks, Moody’s, January 2003.

CHAPTER 3 Bachelier L., Th´eorie de la sp´eculation, Gauthier-Villars, 1900. Bechu T. and Bertrand E., L’Analyse Technique, Economica, 1998. Binmore K., Jeux et th´eorie des jeux, De Boeck & Larcier, 1999. Brealey R. A. and Myers S. C., Principles of Corporate Finance, McGraw-Hill, 1991. Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997. Chen N. F., Roll R., and Ross S. A., Economic forces of the stock market, Journal of Business, No. 59, 1986, pp. 383–403. Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.

384

Bibliography

´ Devolder P., Finance stochastique, Editions ULB, 1993. Dhrymes P. J., Friend I., and Gultekin N. B., A critical re-examination of the empirical evidence on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46. Eeckhoudt L. and Gollier C., Risk, Harvester Wheatsheaf, 1995. Elton E. and Gruber M., Modern Portfolio Theory and Investment Analysis, John Wiley & Sons, Inc, 1991. Elton E., Gruber M., and Padberg M., Optimal portfolios from single ranking devices, Journal of Portfolio Management, Vol. 4, No. 3, 1978, pp. 15–19. Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection, Journal of Finance, Vol. XI, No. 5, 1976, pp. 1341–57. Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection: tracing out the efﬁcient frontier, Journal of Finance, Vol. XIII, No. 1, 1978, pp. 296–302. Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection with upper bounds, Operation Research, 1978. Fama E. and Macbeth J., Risk, return and equilibrium: empirical tests, Journal of Political Economy, Vol. 71, No. 1., 1974, pp. 607–36. Fama E. F., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Fama E. F., Efﬁcient capital markets: a review of theory and empirical work, Journal of Finance, Vol. 25, 1970. Fama E. F., Random walks in stock market prices, Financial Analysis Journal, 1965. Gillet P., L’efﬁcience des march´es ﬁnanciers, Economica, 1999. Gordon M. and Shapiro E., Capital equipment analysis: the required rate proﬁt, Management Science, Vol. 3, October 1956. Grinold C. and Kahn N., Active Portfolio Management, McGraw-Hill, 1998. Lintner J., The valuation of risky assets and the selection of risky investments, Review of Economics and Statistics, Vol. 47, 1965, pp. 13–37. Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Blackwell Publishers, 1987. Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 
1, 1952, pp. 419–33. Mehta M. L., Random Matrices, Academic Press, 1996. Miller M. H. and Modigliani F., Dividend policy, growth and the valuation of shares, Journal of Business, 1961. Morrison D., Multivariate Statistical Methods, McGraw-Hill, 1976. Roger P., L’´evaluation des Actifs Financiers, de Boeck, 1996. Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976, pp. 343–62. Samuelson P., Mathematics on Speculative Price, SIAM Review, Vol. 15, No. 1, 1973. Saporta G., Probabilit´es, Analyse des Donn´ees et Statistique, Technip, 1990. Sharpe W., A simpliﬁed model for portfolio analysis, Management Science, Vol. 9, No. 1, 1963, pp. 277–93. Sharpe W., Capital asset prices, Journal of Finance, Vol. 19, 1964, pp. 425–42. Von Neumann J. and Morgenstern O., Theory of Games and Economic Behaviour, Princeton University Press, 1947.

CHAPTER 4 Bierwag G., Kaufmann G., and Toevs A (Eds.), Innovations in Bond Portfolio Management: Duration Analysis and Immunisation, JAI Press, 1983. Bisi`ere C., La Structure par Terme des Taux d’int´erˆet, Presses Universitaires de France, 1997. Brennan M. and Schwartz E., A continuous time approach to the pricing of bonds, Journal of Banking and Finance, Vol. 3, No. 2, 1979, pp. 133–55. Colmant B., Delfosse V., and Esch L., Obligations, les notions ﬁnanci`eres essentielles, Larcier, 2002. Cox J., Ingersoll J., and Ross J., A theory of the term structure of interest rates, Econometrica, Vol. 53, No. 2, 1985, pp. 385–406. Fabozzi J. F., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000.

Bibliography

385

Heath D., Jarrow R., and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New Methodology, Cornell University, 1987. Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: discrete time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40. Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of Finance, Vol. 41, No. 5, 1986, pp. 1011–29. Macauley F., Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields and Stock Prices in the United States since 1856, New York, National Bureau of Economic Research, 1938, pp. 44–53. Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science, Vol. 4, No. 1, 1973, pp. 141–83. Ramaswamy K. and Sundaresan M., The valuation of ﬂoating-rates instruments: theory and evidence, Journal of Financial Economics, Vol. 17, No. 2, 1986, pp. 251–72. Richard S., An arbitrage model of the term structure of interest rates, Journal of Financial Economics, Vol. 6, No. 1, 1978, pp. 33–57. Schaefer S. and Schwartz E., A two-factor model of the term structure: an approximate analytical solution, Journal of Financial and Quantitative Analysis, Vol. 19, No. 4, 1984, pp. 413–24. Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics, Vol. 5, No. 2, 1977, pp. 177–88.

CHAPTER 5 Black F. and Scholes M., The pricing of options and corporate liabilities, Journal of Political Economy, Vol. 81, 1973, pp. 637–59. Colmant B. and Kleynen G., Gestion du risque de taux d’int´eret ˆ et instruments ﬁnanciers d´eriv´es, Kluwer 1995. Copeland T. E. and Wreston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988. Courtadon G., The pricing of options on default-free bonds, Journal of Financial and Quantitative Analysis, Vol. 17, 1982, pp. 75–100. Cox J., Ross S., and Rubinstein M., Option pricing: a simpliﬁed approach, Journal of Financial Economics, No. 7, 1979, pp. 229–63. ´ Devolder P., Finance stochastique, Editions ULB, 1993. Garman M. and Kohlhagen S., Foreign currency option values, Journal of International Money and Finance, No. 2, 1983, pp. 231–7. Hicks A., Foreign Exchange Options, Woodhead, 1993. Hull J. C., Options, Futures and Others Derivatives, Prentice Hall, 1997. Krasnov M., Kisselev A., Makarenko G., and Chikin E., Math`ematiques sup´erieures pour ing´enieurs et polytechniciens, De Boeck, 1993. Reilly F. K. and Brown K. C., Investment Analysis and Portfolio Management, South-Western, 2000. Rubinstein M., Options for the undecided, in From Black–Scholes to Black Holes, Risk Magazine, 1992. Sokolnikoff I. S. and Redheffer R. M., Mathematics of Physics and Modern Engineering, McGrawHill, 1966.

CHAPTER 6 Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80. Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Johnson N. L. and Kotz S., Continuous Univariate Distribution, John Wiley & Sons, Inc, 1970. Jorion P., Value at Risk, McGraw-Hill, 2001. Pearson E. S. and Hartley H. O., Biometrika Tables for Students, Biometrika Trust, 1976.

CHAPTER 7 Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972. Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1995.

386

Bibliography

Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA, undated. Chase Manhattan Bank NA, Value at Risk, Chase Manhattan Bank NA, 1996. Danielsson J. and De Vries C., Beyond the Sample: Extreme Quantile and Probability Estimation, Mimeo, Iceland University and Tinbergen Institute Rotterdam, 1997. Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Journal of Empirical Finance, No. 4, 1997, pp. 241–57. Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group Discussion Paper 273, London School of Economics, 1997. Embrechts P. Kl¨uppelberg C., and Mikosch T., Modelling External Events for Insurance and Finance, Springer Verlag, 1999. Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5. Gilchrist W. G., Statistical Modelling with Quantile Functions, Chapman & Hall/CRC, 2000. Gnedenko B. V., On the limit distribution of the maximum term in a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53. Gourieroux C., Mod`eles ARCH et applications ﬁnanci`eres, Economica, 1992. Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958. Hendricks D., Evaluation of Value at Risk Models using Historical Data, FRBNY Policy Review, 1996, pp. 39–69. Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of Statistics, Vol. 46, 1975, pp. 1163–73. Hill I. D., Hill R., and Holder R. L, Fitting Johnson curves by moments (Algorithm AS 99), Applied Statistics, Vol. 25, No. 2, 1976, pp. 180–9. Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58. Johnson N. L., Systems of frequency curves generated by methods of translation, Biometrika, Vol. 36, 1949, pp. 1498–575. Longin F. 
M., From value at risk to stress testing: the extreme value approach, Journal of Banking and Finance, No. 24, 2000, pp. 1097–130. Longin F. M., Extreme Value Theory: Introduction and First Applications in Finance, Journal de la Soci´et´e Statistique de Paris, Vol. 136, 1995, pp. 77–97. Longin F. M., The asymptotic distribution of extreme stock market returns, Journal of Business, No. 69, 1996, pp. 383–408. McNeil A. J., Estimating the Tails of Loss Severity Distributions using Extreme Value Theory, Mimeo, ETH Zentrum Zurich, 1996. McNeil A. J., Extreme value theory for risk managers, in Internal Modelling and CAD II, Risk Publications, 1999, pp. 93–113. Mina J. and Yi Xiao J., Return to RiskMetrics: The Evolution of a Standard, RiskMetrics, 2001. Morgan J. P., RiskMetrics: Technical Document, 4th Ed., Morgan Guaranty Trust Company, 1996. Pickands J., Statistical inference using extreme order statistics, Annals of Statistics, Vol. 45, 1975, pp. 119–31. Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhauser Verlag, 2001. Rouvinez C., Going Greek with VAR, Risk Magazine, February 1997, pp. 57–65. Schaller P., On Cash Flow Mapping in VAR Estimation, Creditanstalt-Bankverein, CA RISC199602237, 1996. Stambaugh V., Value at Risk, not published, 1996. Vose D., Quantitative Risk Analysis, John Wiley & Sons, Ltd, 1996.

CHAPTER 9 Lopez T., D´elimiter le risque de portefeuille, Banque Magazine, No. 605, July–August 1999, pp. 44–6.

CHAPTER 10 Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997. Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981.

Bibliography

387

Esch L., Kieffer R., and Lopez T., Value at Risk – Vers un risk management moderne, De Boeck, 1997. Litt F. X., Analyse num´erique, premi`ere partie, ULG, 1999. Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell, 1987. Markowitz H., Portfolio Selection: Efﬁcient Diversiﬁcation of Investments, Blackwell Publishers, 1991. Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77–91. Nougier J-P., M´ethodes de calcul num´erique, Masson, 1993. Vauthey P., Une approche empirique de l’optimisation de portefeuille, Eds. Universitaires Fribourg Suisse, 1990.

CHAPTER 11 Chen N. F., Roll R., and Ross S. A., Economic forces of the stock market, Journal of Business, No. 59, 1986, pp. 383–403. Dhrymes P. J., Friends I., and Gultekin N. B., A critical re-examination of the empirical evidence on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46. Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976, pp. 343–62.

CHAPTER 12 Ausubel L., The failure of competition in the credit card market, American Economic Review, vol. 81, 1991, pp. 50–81. Cooley W. W. and Lohnes P. R., Multivariate Data Analysis, John Wiley & Sons, Inc, 1971. Damel P., La mod´elisation des contrats bancaires a` taux r´evisable: une approche utilisant les corr´elations canoniques, Banque et March´es, mars avril, 1999. Damel P., L’apport de replicating portfolio ou portefeuille r´epliqu´e en ALM: m´ethode contrat par contrat ou par la valeur optimale, Banque et March´es, mars avril, 2001. Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: a new methodology for contingent claims valuation, Econometrica, vol. 60, 1992, pp. 77–105. Hotelling H., Relation between two sets of variables, Biometrica, vol. 28, 1936, pp. 321–77. Hull J. and White A., Pricing interest rate derivative securities, Review of Financial Studies, vols 3 & 4, 1990, pp. 573–92. Hutchinson D. and Pennachi G., Measuring rents and interest rate risk in imperfect ﬁnancial markets: the case of retail bank deposit, Journal of Financial and Quantitative Analysis, vol. 31, 1996, pp. 399–417. Mardia K. V., Kent J. T., and Bibby J. M., Multivariate Analysis, Academic Press, 1979. Sanyal A., A Continuous Time Monte Carlo Implementation of the Hull and White One Factor Model and the Pricing of Core Deposit, unpublished manuscript, December 1997. Selvaggio R., Using the OAS methodology to value and hedge commercial bank retail demand deposit premiums, The Handbook of Asset/Liability Management, Edited by F. J. Fabozzi & A. Konishi, McGraw-Hill, 1996. Smithson C., A Lego approach to ﬁnancial engineering in the Handbook of Currency and Interest Rate Risk Management, Edited by R. Schwartz & C. W. Smith Jr., New York Institute of Finance, 1990. Tatsuoka M. M., Multivariate Analysis, John Wiley & Sons, Ltd, 1971. Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.

APPENDIX 1 Bair J., Math´ematiques g´en´erales, De Boeck, 1990. Esch L., Math´ematique pour e´ conomistes et gestionnaires (2nd Edition), De Boeck, 1999. Guerrien B., Alg`ebre lin´eaire pour e´ conomistes, Economica, 1982. Ortega J. M., Matrix Theory, Plenum, 1987. Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.

APPENDIX 2 Baxter M. and Rennie A., Financial Calculus, Cambridge University Press, 1996.

388

Bibliography

Feller W., An Introduction to Probability Theory and its Applications (2 volumes), John Wiley & Sons, Inc, 1968. Grimmett G. and Stirzaker D., Probability and Random Processes, Oxford University Press, 1992. Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Grifﬁn, 1977. Loeve M., Probability Theory (2 volumes), Springer-Verlag, 1977. Roger P., Les outils de la mod´elisation ﬁnanci`ere, Presses Universitaires de France, 1991. Ross S. M., Initiation aux probabiliti´es, Presses Polytechniques et Universitaires Romandes, 1994.

APPENDIX 3 Ansion G., Econom´etrie pour l’enterprise, Eyrolles, 1988. Dagnelie P., Th´eorie et m´ethodes statistique (2 volumes), Presses Agronomiques de Gembloux, 1975. Johnston J., Econometric Methods, McGraw-Hill, 1972. Justens D., Statistique pour d´ecideurs, De Boeck, 1988. Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Grifﬁn, 1977.

APPENDIX 4 Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1928, pp. 180–90. Gnedenko B. V., On the limit distribution for the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53. Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58.

APPENDIX 5 Chatﬁeld C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980. Saporta G., Probabilit´es, Analyse des Donn´ees et Statistique, Technip, 1990.

APPENDIX 6 Chatﬁeld C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980. Saporta G., Probabilit´es, Analyse des Donn´ees et Statistique, Technip, 1990.

APPENDIX 7 Droesbeke J. J., Fichet B., and Tassi P., Modélisation ARCH, théorie statistique et applications dans le domaine de la finance, Éditions ULB, 1994. Engle R. F., Auto-regressive conditional heteroscedasticity with estimate of the variance of United Kingdom inflation, Econometrica, No. 50, 1982, pp. 987–1003. Gouriéroux C., Modèles ARCH et applications financières, Economica, 1992. Nelson D. B., Conditional heteroscedasticity in asset returns: a new approach, Econometrica, No. 39, 1991, pp. 347–70.

APPENDIX 8 Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981. Litt F. X., Analyse numérique, première partie, ULG, 1999. Nougier J-P., Méthodes de calcul numérique, Masson, 1993.

INTERNET SITES http://www.aptltd.com http://www.bis.org/index.htm http://www.cga-canada.org/fr/magazine/nov-dec02/Cyberguide f.htm http://www.fasb.org http://www.iasc.org.uk/cmt/0001.asp http://www.ifac.org http://www.prim.lu

Index

absolute global risk 285
absolute risk aversion coefficient 88
accounting standards 9–10
accrued interest 118–19
actuarial output rate on issue 116–17
actuarial return rate at given moment 117
adjustment tests 361
Aitken extrapolation 376
Akaike's information criterion (AIC) 319
allocation: independent allocation 288; joint allocation 289; of performance level 289–90; of systematic risk 288–9
American option 149
American put 158–9
arbitrage 31
arbitrage models 138–9; with state variable 139–42
arbitrage pricing theory (APT) 97–8, 99; absolute global risk 285; analysis of style 291–2; beta 290, 291; factor-sensitivity profile 285; model 256, 285–94; relative global risk/tracking error 285–7
ARCH 320
ARCH-GARCH models 373
arithmetical mean 36–7
ARMA models 318–20
asset allocation 104, 274
asset liability management: replicating portfolios 311–21; repricing schedules 301–11; simulations 300–1; structural risk analysis in 295–9; VaR in 301
autocorrelation test 46

autoregressive integrated moving average 320 autoregressive moving average (ARMA) 318 average deviation 41 bank offered rate (BOR) 305 basis point 127 Basle Committee for Banking Controls 4 Basle Committee on Banking Supervision 3–9 Basle II 5–9 Bayesian information criterion (BIC) 319 bear money spread 177 benchmark abacus 287–8 Bernoulli scheme 350 Best Linear Unbiased Estimators (BLUE) 363 beta APT 290, 291 portfolio 92 bijection 335 binomial distribution 350–1 binomial formula (Newton's) 111, 351 binomial law of probability 165 binomial trees 110, 174 binomial trellis for underlying equity 162 bisection method 380 Black and Scholes model 33, 155, 174, 226, 228, 239 for call option 169 dividends and 173 for options on equities 168–73 sensitivity parameters 172–3 BLUE (Best Linear Unbiased Estimators) 363 bond portfolio management strategies 135–8 active strategy 137–8 duration and convexity of portfolio 135–6 immunizing a portfolio 136–7 passive strategy: immunisation 135–7 bonds average instant return on 140


bonds (continued ) deﬁnition 115–16 ﬁnancial risk and 120–9 price 115 price approximation 126 return on 116–19 sources of risk 119–21 valuing 119 bootstrap method 233 Brennan and Schwarz model 139 building approach 316 bull money spread 177 business continuity plan (BCP) 14 insurance and 15–16 operational risk and 16 origin, deﬁnition and objective 14 butterﬂy money spread 177 calendar spread 177 call-associated bonds 120 call option 149, 151, 152 intrinsic value 153 premium breakdown 154 call–put parity relation 166 for European options 157–8 canonical analysis 369 canonical correlation analysis 307–9, 369–70 capital asset pricing model (CAPM or MEDAF) 93–8 equation 95–7, 100, 107, 181 cash 18 catastrophe scenarios 20, 32, 184, 227 Cauchy’s law 367 central limit theorem (CLT) 41, 183, 223, 348–9 Charisma 224 Chase Manhattan 224, 228 Choleski decomposition method 239 Choleski factorisation 220, 222, 336–7 chooser option 176 chord method 377–8 classic chord method 378 clean price 118 collateral management 18–19 compliance 24 compliance tests 361 compound Poisson process 355 conditional normality 203 conﬁdence coefﬁcient 360 conﬁdence interval 360–1 continuous models 30, 108–9, 111–13, 131–2, 134 continuous random variables 341–2 contract-by-contract 314–16

convergence 375–6 convertible bonds 116 convexity 33, 149, 181 of a bond 127–9 corner portfolio 64 correlation 41–2, 346–7 counterparty 23 coupon (nominal) rate 116 coupons 115 covariance 41–2, 346–7 cover law of probability 164 Cox, Ingersoll and Ross model 139, 145–7, 174 Cox, Ross and Rubinstein binomial model 162–8 dividends and 168 one period 163–4 T periods 165–6 two periods 164–5 credit risk 12, 259 critical line algorithm 68–9 debentures 18 decision channels 104, 105 default risk 120 deficit constraint 90 degenerate random variable 341 delta 156, 181, 183 delta hedging 157, 172 derivatives 325–7 calculations 325–6 definition 325 extrema 326–7 geometric interpretations 325 determinist models 108–9 generalisation 109 stochastic model and 134–5 deterministic structure of interest rates 129–35 development models 30 diagonal model 70 direct costs 26 dirty price 118 discrete models 30, 108, 109–11, 130, 132–4 discrete random variables 340–1 dispersion index 26 distortion models 138 dividend discount model 104, 107–8 duration 33, 122–7, 149 and characteristics of a bond 124 definition 121 extension of concept of 148 interpretations 121–3 of equity funds 299 of specific bonds 123–4

dynamic interest-rate structure 132–4 dynamic models 30 dynamic spread 303–4 efficiency, concept of 45 efficient frontier 27, 54, 59, 60 for model with risk-free security 78–9 for reformulated problem 62 for restricted Markowitz model 68 for Sharpe's simple index model 73 unrestricted and restricted 68 efficient portfolio 53, 54 EGARCH models 320, 373 elasticity, concept of 123 Elton, Gruber and Padberg method 79–85, 265, 269–74 adapting to VaR 270–1 cf VaR 271–4 maximising risk premium 269–70 equities definition 35 market efficiency 44–8 market return 39–40 portfolio risk 42–3 return on 35–8 return on a portfolio 38–9 security risk within a portfolio 43–4 equity capital adequacy ratio 4 equity dynamic models 108–13 equity portfolio diversification 51–93 model with risk-free security 75–9 portfolio size and 55–6 principles 51–5 equity portfolio management strategies 103–8 equity portfolio theory 183 equity valuation models 48–51 equivalence, principle of 117 ergodic estimator 40, 42 estimated variance–covariance matrix method (VC) 201, 202–16, 275, 276, 278 breakdown of financial assets 203–5 calculating VaR 209–16 hypotheses and limitations 235–7 installation and use 239–41 mapping cashflows with standard maturity dates 205–9 valuation models 237–9 estimator for mean of the population 360 European call 158–9 European option 149 event-based risks 32, 184 ex ante rate 117 ex ante tracking error 285, 287 ex post return rate 121 exchange options 174–5 exchange positions 204


exchange risk 12 exercise price of option 149 expected return 40 expected return risk 41, 43 expected value 26 exponential smoothing 318 extrema 326–7, 329–31 extreme value theory 230–4, 365–7 asymptotic results 365–7 attraction domains 366–7 calculation of VaR 233–4 exact result 365 extreme value theorem 230–1 generalisation 367 parameter estimation by regression 231–2 parameter estimation using the semi-parametric method 233, 234 factor-8 mimicking portfolio 290 factor-mimicking portfolios 290 factorial analysis 98 fair value 10 fat tail distribution 231 festoon effect 118, 119 final prediction error (FPE) 319 Financial Accounting Standards Board (FASB) 9 financial asset evaluation line 107 first derivative 325 Fisher's skewness coefficient 345–6 fixed-income securities 204 fixed-rate bonds 115 fixed rates 301 floating-rate contracts 301 floating-rate integration method 311 FRAs 276 Fréchet's law 366, 367 frequency 253 fundamental analysis 45 gamma 156, 173, 181, 183 gap 296–7, 298 GARCH models 203, 320 Garman–Kohlhagen formula 175 Gauss-Seidel method, nonlinear 381 generalised error distribution 353 generalised Pareto distribution 231 geometric Brownian motion 112, 174, 218, 237, 356 geometric mean 36 geometric series 123, 210, 328–9 global portfolio optimisation via VaR 274–83 generalisation of asset model 275–7 construction of optimal global portfolio 277–8 method 278–83


good practices 6 Gordon–Shapiro formula 48–50, 107, 149 government bonds 18 Greeks 155–7, 172, 181 gross performance level and risk withdrawal 290–1 Gumbel's law 366, 367

models for bonds 149 static structure of 130–2 internal audit vs. risk management 22–3 internal notation (IN) 4 intrinsic value of option 153 Itô formula (Itô lemma) 140, 169, 357 Itô process 112, 356

Heath, Jarrow and Morton model 138, 302 hedging formula 172 Hessian matrix 330 high leverage effect 257 Hill’s estimator 233 historical simulation 201, 224–34, 265 basic methodology 224–30 calculations 239 data 238–9 extreme value theory 230–4 hypotheses and limitations 235–7 installation and use 239–41 isolated asset case 224–5 portfolio case 225–6 risk factor case 224 synthesis 226–30 valuation models 237–8 historical volatility 155 histories 199 Ho and Lee model 138 homogeneity tests 361 Hull and White model 302, 303 hypothesis test 361–2

Jensen index 102–3 Johnson distributions 215 joint allocation 289 joint distribution function 342

IAS standards 10 IASB (International Accounting Standards Board) 9 IFAC (International Federation of Accountants) 9 immunisation of bonds 124–5 implied volatility 155 in the money 153, 154 independence tests 361 independent allocation 288 independent random variables 342–3 index funds 103 indifference curves 89 indifference, relation of 86 indirect costs 26 inequalities on calls and puts 159–60 inferential statistics 359–62 estimation 360–1 sampling 359–60 sampling distribution 359–60 instant term interest rate 131 integrated risk management 22, 24–5 interest rate curves 129

kappa see vega kurtosis coefficient 182, 189, 345–6

Lagrangian function 56, 57, 61, 63, 267, 331 for risk-free security model 76 for Sharpe's simple index model 71 Lagrangian multipliers 57, 331 law of large numbers 223, 224, 344 law of probability 339 least square method 363 legal risk 11, 21, 23–4 Lego approach 316 leptokurtic distribution 41, 182, 183, 189, 218, 345 linear equation system 335–6 linear model 32, 33, 184 linearity condition 202, 203 Lipschitz's condition 375–6 liquidity bed 316 liquidity crisis 17 liquidity preference 316 liquidity risk 12, 16, 18, 296–7 logarithmic return 37 logistic regression 309–10, 371 log-normal distribution 349–50 log-normal law with parameter 349 long (short) straddle 176 loss distribution approach 13 lottery bonds 116 MacLaurin development 275, 276 mapping cashflows 205–9 according to RiskMetricsTM 206–7 alternative 207–8 elementary 205–6 marginal utility 87 market efficiency 44–8 market model 91–3 market price of the risk 141 market risk 12 market straight line 94

market timing 104–7 Markowitz's portfolio theory 30, 41, 43, 56–69, 93, 94, 182 first formulation 56–60 reformulating the problem 60–9 mathematic valuation models 199 matrix algebra 239 calculus 332–7 diagonal 333 n-order 332 operations 333–4 symmetrical 332–3, 334–5 maturity price of bond 115 maximum outflow 17–18 mean 343–4 mean variance 27, 265 for equities 149 measurement theory 344 media risk 12 Merton model 139, 141–2 minimum equity capital requirements 4 modern portfolio theory (MPT) 265 modified duration 121 money spread 177 monoperiodic models 30 Monte Carlo simulation 201, 216–23, 265, 303 calculations 239 data 238–9 estimation method 218–23 hypotheses and limitations 235–7 installation and use 239–41 probability theory and 216–18 synthesis 221–3 valuation models 237–8 multi-index models 221, 266 multi-normal distribution 349 multivariate random variables 342–3 mutual support 147–9 Nelson and Schaefer model 139 net present value (NPV) 298–9, 302–3 neutral risk 164, 174 New Agreement 4, 5 Newton–Raphson nonlinear iterative method 309, 379–80, 381 Newton's binomial formula 111, 351 nominal rate of a bond 115, 116 nominal value of a bond 115 non-correlation 347 nonlinear equation systems 380–1 first-order methods 377–9 iterative methods 375–7 n-dimensional iteration 381 principal methods 381


solving 375–81 nonlinear Gauss-Seidel method 381 nonlinear models independent of time 33 nonlinear regression 234 non-quantifiable risks 12–13 normal distribution 41, 183, 188–90, 237, 254, 347–8 normal law 188 normal probability law 183 normality 202, 203, 252–4 observed distribution 254 operational risk 12–14 business continuity plan (BCP) and 16 definition 6 management 12–13 philosophy of 5–9 triptych 14 options complex 175–7 definition 149 on bonds 174 sensitivity parameters 155–7 simple 175 strategies on 175–7 uses 150–2 value of 153–60 order of convergence 376 Ornstein–Uhlenbeck process 142–5, 356 OTC derivatives market 18 out of the money 153, 154 outliers 241 Pareto distribution 189, 367 Parzen CAT 319 partial derivatives 329–31 payment and settlement systems 18 Pearson distribution system 183 perfect market 31, 44 performance evaluation 99–108 perpetual bond 123–4 Picard's iteration 268, 271, 274, 280, 375, 376, 381 pip 247 pockets of inefficiency 47 Poisson distribution 350 Poisson process 354–5 Poisson's law 351 portfolio beta 92 portfolio risk management investment strategy 258 method 257–64 risk framework 258–64 power of the test 362 precautionary surveillance 3, 4–5 preference, relation of 86


premium 149 price at issue 115 price-earning ratio 50–1 price of a bond 127 price variation risk 12 probability theory 216–18 process risk 24 product risk 23 pseudo-random numbers 217 put option 149, 152 quadratic form 334–7 qualitative approach 13 quantifiable risks 12, 13 quantile 188, 339–40 quantitative approach 13 Ramaswamy and Sundaresan model 139 random aspect of financial assets 30 random numbers 217 random variables 339–47 random walk 45, 111, 203, 355 statistical tests for 46 range forwards 177 rate fluctuation risk 120 rate mismatches 297–8 rate risk 12, 303–11 redemption price of bond 115 regression line 363 regressions 318, 362–4 multiple 363–4 nonlinear 364 simple 362–3 regula falsi method 378–9 relative fund risk 287–8 relative global risk 285–7 relative risks 43 replicating portfolios 302, 303, 311–21 with optimal value method 316–21 repos market 18 repricing schedules 301–11 residual risk 285 restricted Markowitz model 63–5 rho 157, 173, 183 Richard model 139 risk, attitude towards 87–9 risk aversion 87, 88 risk factors 31, 184 risk-free security 75–9 risk, generalising concept 184 risk indicators 8 risk management cost of 25–6 environment 7

function, purpose of 11 methodology 19–21 vs back ofﬁce 22 risk mapping 8 risk measurement 8, 41 risk-neutral probability 162, 164 risk neutrality 87 risk of one equity 41 risk of realisation 120 risk of reinvestment 120 risk of reputation 21 risk per share 181–4 risk premium 88 risk return 26–7 risk transfer 14 risk typology 12–19 Risk$TM 224, 228 RiskMetricsTM 202, 203, 206–7, 235, 236, 238, 239–40 scenarios and stress testing 20 Schaefer and Schwartz model 139 Schwarz criterion 319 scope of competence 21 scorecards method 7, 13 security 63–5 security market line 107 self-assessment 7 semi-form of efﬁciency hypothesis 46 semi-parametric method 233 semi-variance 41 sensitivity coefﬁcient 121 separation theorem 94–5, 106 series 328 Sharpe’s multi-index model 74–5 Sharpe’s simple index method 69–75, 100–1, 132, 191, 213, 265–9 adapting critical line algorithm to VaR 267–8 cf VaR 269 for equities 221 problem of minimisation 266–7 VaR in 266–9 short sale 59 short-term interest rate 130 sign test 46 simulation tests for technical analysis methods 46 simulations 300–1 skewed distribution 182 skewness coefﬁcient 182, 345–6 speciﬁc risk 91, 285 speculation bubbles 47 spot 247

spot price 150 spot rate 129, 130 spreads 176–7 square root process 145 St Petersburg paradox 85 standard Brownian motion 33, 355 standard deviation 41, 344–5 standard maturity dates 205–9 standard normal law 348 static models 30 static spread 303–4 stationarity condition 202, 203, 236 stationary point 327, 330 stationary random model 33 stochastic bond dynamic models 138–48 stochastic differential 356–7 stochastic duration 121, 147–8 random evolution of rates 147 stochastic integral 356–7 stochastic models 109–13 stochastic process 33, 353–7 particular 354–6 path of 354 stock exchange indexes 39 stock picking 104, 275 stop criteria 376–7 stop loss 258–9 straddles 175, 176 strangles 175, 176 strategic risk 21 stress testing 20, 21, 223 strike 149 strike price 150 strong form of efficiency hypothesis 46–7 Student distribution 189, 235, 351–2 Student's law 367 Supervisors, role of 8 survival period 17–18 systematic inefficiency 47 systematic risk 44, 91, 285 allocation of 288–9 tail parameter 231 taste for risk 87 Taylor development 33, 125, 214, 216, 275–6 Taylor formula 37, 126, 132, 327–8, 331 technical analysis 45 temporal aspect of financial assets 30 term interest rate 129, 130 theorem of expected utility 86 theoretical reasoning 218 theta 156, 173, 183 three-equity portfolio 54


time value of option 153, 154 total risk 43 tracking errors 103, 285–7 transaction risk 23–4 transition bonds 116 trend extrapolations 318 Treynor index 102 two-equity portfolio 51–4

unbiased estimator 360 underlying equity 149 uniform distribution 352 uniform random variable 217 utility function 85–7 utility of return 85 utility theory 85–90, 183

valuation models 30, 31–3, 160–75, 184 value at risk (VaR) 13, 20–1 based on density function 186 based on distribution function 185 bond portfolio case 250–2 breaking down 193–5 calculating 209–16 calculations 244–52 component 195 components of 195 definition 195–6 estimation 199–200 for a portfolio 190–7 for a portfolio of linear values 211–13 for a portfolio of nonlinear values 214–16 for an isolated asset 185–90 for equities 213–14 heading investment 196–7 incremental 195–7 individual 194 link to Sharpe index 197 marginal 194–5 maximum, for portfolio 263–4 normal distribution 188–90 Treasury portfolio case 244–9 typology 200–2 value of basis point (VBP) 19–20, 21, 127, 245–7, 260–3 variable contracts 301 variable interest rates 300–1 variable rate bonds 115 variance 41, 344–5 variance of expected returns approach 183 variance–covariance matrix 336 Vasicek model 139, 142–4, 174


vega (kappa) 156, 173 volatility of option 154–5

weak form of the efficiency hypothesis 46 Weibull's law 366, 367 Wiener process 355

yield curve 129 yield to maturity (YTM) 250

zero-coupon bond 115, 123, 129 zero-coupon rates, analysis of correlations on 305–7

Index compiled by Annette Musker
