# Portfolio Management Of Multiple Strategies Using Python

In this post, we will review what a portfolio is and the elements it contains, along with some performance measures. Later, we will create a simple portfolio with two strategies and several instruments.

We will analyze the Kelly criterion, explore different weight combinations that help maximize the return, and compare the results with a simple portfolio of equally distributed weights.

Finally, we propose a comparison with the classical method of efficient frontier portfolio management.

Check out my previous article on Introduction To Portfolio Management, which explains everything you need to know about portfolio management: techniques, types, derivatives, and much more.

## Introduction

Managing a portfolio of multiple strategies does not differ much from managing a portfolio of assets; in this case, the assets are the strategies we have in operation.

Of course, these strategies trade instruments in which we can be long, short or flat. The objective of managing a portfolio of strategies is still to maximize return while minimizing risk.

With this simple portfolio, we arrive at the basic question:

How do we distribute capital among the different strategies and instruments in order to maximise the return and minimise the risk?

To have a benchmark against which to compare our optimization, we will start with a simple portfolio that assigns the same weight to each of its elements.

For the optimization of weights in the capital distribution, there are numerous academic studies, each one trying to optimize different parameters.

Two of the best known and diametrically opposed methods are:

• The efficient frontier, proposed by Markowitz, in which we try to maximize the return for a given risk, i.e. it focuses on containing the assigned risk.
• Kelly's method, proposed by John Kelly and popularized by Ed Thorp, which tries to maximise the expected log utility of wealth, i.e. its focus is on maximizing return.

It is the trader's responsibility to know these and other methods in order to determine which of them best suits their investment style and risk appetite.

## Efficient Portfolios

An efficient portfolio is defined as a portfolio with minimal risk for a given return or, equivalently, as the portfolio with the highest return for a given level of risk.
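In standard notation (a sketch of the idea, not code from this post): with expected returns $\mu$, covariance matrix $\Sigma$ and weight vector $w$, each point on the efficient frontier solves

```latex
\begin{aligned}
\min_{w}\quad & w^{\top} \Sigma\, w \\
\text{s.t.}\quad & w^{\top} \mu = r_{\text{target}}, \\
& \mathbf{1}^{\top} w = 1 .
\end{aligned}
```

Sweeping $r_{\text{target}}$ over the feasible returns traces out the frontier.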

As algorithmic traders, our portfolio is made up of strategies or rules and each of these manages one or more instruments.

When we only have one strategy managing one instrument, portfolio management is limited to maximizing return while minimizing risk. This would be the simplest portfolio, but not a simple solution.

It is not a simple solution because we have to answer some questions.

Can we achieve the desired return with the instrument we are working with?

Are there other instruments that allow us to achieve a higher return with the same risk or less risk with the same return?

On the other hand, if we want to diversify the portfolio and therefore reduce the risk associated with the strategy or instrument, we must build a portfolio with different instruments and ideally different strategies that capture different market regimes.

Therefore, in addition to the above questions, we need to answer what weight we assign to each strategy and what weight we give to each instrument within the portfolio to achieve the required objective (Max return vs Min risk).

## Portfolio's Elements

Let's define the portfolio elements over which we have some control:

• Capital: The amount of money we have available to invest or speculate.
• Instruments: These are the assets available for inclusion in our portfolio management strategy.
• Currency: The currency in which the asset is traded. When we invest in an asset denominated in a foreign currency, we assume the foreign exchange risk.
• Volatility: Also called asset risk, indicates the movement of the asset for the period analysed.
• Cost: Amount of money needed to buy/sell-short an asset.
• Liquidity: It is the capacity of the asset to absorb our operations.
• Rules: These are the strategies that try to take advantage of some market regime.
• Position weight: The amount of capital we allocate to each asset and/or strategy.
• Return: Absolute returns are the return of our portfolio and relative returns are the return of our portfolio compared to a benchmark. When we are not comparing our returns with anybody, absolute return is a good measure, but when we need to compare performances we use the relative return.
• Risk or Volatility: This is the (estimated) amount that the portfolio assumes.

## Portfolio performance measures

Algorithmic traders have at their disposal a large number of measures to analyze the strategy and/or the portfolio performance.

Some of the most used Portfolio performance measures are:

• Annualised Returns
• Annualised Volatility
• Sharpe Ratio
• Sortino Ratio
• Beta
• Treynor Ratio
• Information Ratio
• Skewness
• Kurtosis
• Maximum Drawdown
• Profit ratio
• Holding period

You can find a complete description of these measures in this post.
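As a rough sketch of how a few of these measures can be computed by hand from a daily return series (the formulas are the common textbook versions; the `basic_performance` helper is just a name for this example):

```python
import numpy as np
import pandas as pd

def basic_performance(returns, periods=252):
    """Compute a few of the measures listed above for a daily return series."""
    ann_return = (1 + returns).prod() ** (periods / len(returns)) - 1
    ann_vol = returns.std() * np.sqrt(periods)
    sharpe = returns.mean() / returns.std() * np.sqrt(periods)
    downside_vol = returns[returns < 0].std() * np.sqrt(periods)
    equity = (1 + returns).cumprod()          # growth of 1 unit of capital
    max_drawdown = (equity / equity.cummax() - 1).min()
    return {
        "annual_return": ann_return,
        "annual_volatility": ann_vol,
        "sharpe": sharpe,
        "sortino": ann_return / downside_vol,
        "skew": returns.skew(),
        "kurtosis": returns.kurtosis(),
        "max_drawdown": max_drawdown,
    }

# Quick check on synthetic daily returns
rng = np.random.default_rng(0)
stats = basic_performance(pd.Series(rng.normal(0.0005, 0.01, 1000)))
```

pyfolio computes these (and many more) for us, but having hand-rolled versions is useful for sanity-checking a report.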

In addition to these individual measures, the pyfolio library implements a fantastic catalogue of performance measures and graphics that are certainly worth learning to use. We will see some of its performance reports throughout this post.

## Building a simple portfolio

To build our example portfolio we are going to use randomly generated time series that simulate the returns of two strategies over several instruments.

• Strategy 1 - The first strategy, which we will call A, is a trend-following system and, as is typical of these strategies, it has a positive bias.
• Strategy 2 - The second strategy, which we will call B, is a mean-reversion system and, as is typical of these strategies, it has a negative bias.

### Role of Bias

The bias or skew is an important concept for characterizing the behaviour of a strategy, as it indicates the shape of the returns distribution.

#### Positive Bias

A positive bias means that we have small, frequent losses but we capture the infrequent outliers of the distribution. This behaviour is typical of a trend-following system, since we have frequent false signals with small losses and infrequent large returns; in other words, the system cuts losses quickly and lets the gains run.

#### Negative Bias

A negative bias means that we have small, frequent gains and occasional large losses. This behaviour is typical of mean-reversion, arbitrage, option-selling or similar systems, i.e. a system that systematically collects small profits and suffers infrequent large losses.
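Since the strategies in this post are represented by simulated return series with a target skew, here is one way such a series can be generated, as a sketch using `scipy.stats.skewnorm` (this is not necessarily how the CSV files used below were produced):

```python
import numpy as np
from scipy.stats import skewnorm

def simulated_returns(n=2500, daily_mean=0.0005, daily_vol=0.012,
                      alpha=4.0, seed=42):
    """Draw daily returns from a skew-normal distribution.
    alpha > 0 gives positive skew (trend-following profile),
    alpha < 0 gives negative skew (mean-reversion profile)."""
    raw = skewnorm.rvs(alpha, size=n, random_state=seed)
    raw = (raw - raw.mean()) / raw.std()   # standardize the sample
    return daily_mean + daily_vol * raw    # rescale to the target mean/vol

trend_like = simulated_returns(alpha=4.0)       # positively skewed returns
reversion_like = simulated_returns(alpha=-4.0)  # negatively skewed returns
```

Positive `alpha` yields the trend-following profile described above; negative `alpha` yields the mean-reversion profile.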

### Strategies to be used

In this post, we will work directly with the strategies' returns.

Needless to say, any strategy considered for the portfolio must have passed a backtest that gives us an adequate level of confidence.

Check this post if you need to review the basics of backtesting. What Is Backtesting A Trading Strategy?

## About the instruments or assets

Assets are the main elements of a portfolio, and their characteristics are decisive for obtaining the desired risk/return ratio. Some of the most important characteristics are:

• Currency
• Volatility
• Liquidity
• Cost
• Commission
• Slippage
• Correlation (in relation to other assets)

#### Currency

If our portfolio is denominated in dollars and we buy an instrument on the European stock exchange, we are buying in euros. Therefore, the return on our investment not only depends on the return of the instrument (or strategy) but also depends on the fate of the currency.

In the short term, it may be insignificant, but in the long term, it may boost return, reduce it or increase losses.

#### Volatility

The volatility of the instrument allows us to estimate if we will be able to reach the desired return or if we will be able to contain the required risk. That is to say, if we want to boost the return, we will generally look for more volatile assets and if we want to contain the risk we will look for less volatile assets.

It is difficult to raise the return of our strategy to 20% with a treasury bond with an annualized return of 3% (perhaps by increasing the position, leverage or other formulas, but it is difficult).

On the other hand, it is difficult to contain the risk of our strategy at 10% if we fill the portfolio of wild penny-stocks with volatilities of more than 300%.

#### Liquidity

The liquidity of an instrument indicates its capacity to absorb our entry or exit position. Logically, this is more important for strategies that handle large positions, but the liquidity of even a single contract can be critical at certain times (expiration dates, moments of panic, etc.).

#### Cost

The cost of the asset determines the position size and the weight that the asset can have within our portfolio.

Let's suppose that we have a strategy that exploits a characteristic of the gold price. We can invest in gold in multiple ways: Gold futures contracts, e-mini Gold and Micro Gold futures, options, ETFs, etc., each with its own cost, volatility, commissions, slippage, and so on.

#### Commission and Slippage

Commissions and slippage undermine the return on our portfolio and should be studied in depth. Slippage is closely related to the bid-ask spread.

#### Correlation

Finally, when we are analyzing different instruments to include in our portfolio of strategies it is necessary to take into account the correlation with possible candidates.

For example, if our portfolio strategy is exploiting a trend following system with an e-mini gold contract, it would not make much sense from a diversification point of view to include silver futures, which usually have a high correlation with gold.

Ideally, we will look for low-correlation assets on which to exploit the same strategy.
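As an illustrative sketch of this screening step (the `low_correlation_candidates` helper and the 0.5 threshold are invented for this example, not part of the original workflow):

```python
import numpy as np
import pandas as pd

def low_correlation_candidates(portfolio_returns, candidate_returns, threshold=0.5):
    """Keep candidate columns whose absolute correlation with every
    existing portfolio column stays below the threshold (illustrative rule)."""
    keep = []
    for cand in candidate_returns.columns:
        corrs = portfolio_returns.corrwith(candidate_returns[cand]).abs()
        if (corrs < threshold).all():
            keep.append(cand)
    return keep

rng = np.random.default_rng(7)
gold = pd.Series(rng.normal(0, 0.01, 500))
portfolio = pd.DataFrame({"gold": gold})
candidates = pd.DataFrame({
    "silver": gold * 0.9 + rng.normal(0, 0.003, 500),  # highly correlated with gold
    "bond": pd.Series(rng.normal(0, 0.005, 500)),      # roughly uncorrelated
})
selected = low_correlation_candidates(portfolio, candidates)
```

With the synthetic data above, the highly correlated "silver" series is rejected while the independent "bond" series is kept.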

## Importing the libraries and data

import pandas as pd
import numpy as np
import datetime
import math
from tabulate import tabulate
import matplotlib.pyplot as plt
import seaborn as sns
import cvxopt as opt
from cvxopt import blas, solvers
import cvxpy as cp
import pyfolio as pf


## Multiple Strategies

### Strategy A - Trend following system - Instrument 1

Here we have simulated the return of a trend following strategy and forced it to have a Sharpe Ratio of 0.5 and skewness of 1.

In [ ]:

StrategyA1_SR05_SKW1_returns = pd.read_csv('StrategyA1_SR0.5_SKW1.csv', header=None, parse_dates=True, index_col=0)
StrategyA1_SR05_SKW1_returns.columns=['Return']

StrategyA1_SR05_SKW1_returns.plot(title = 'Daily return - Strategy A Instrument 1', figsize=(12, 6))

cum_datalist=[1+x for x in StrategyA1_SR05_SKW1_returns['Return']]
cum_datalist=pd.DataFrame(cum_datalist, index=StrategyA1_SR05_SKW1_returns.index)
cum_datalist.cumprod().plot(title = 'Cumulative Daily return - Strategy A Instrument 1', figsize=(12, 6))




### Strategy A - Trend following system - Instrument 2

Here we have simulated the return of a trend following strategy and forced it to have a Sharpe Ratio of 1 and skewness of 1.

In [ ]:

StrategyA2_SR1_SKW1_returns = pd.read_csv('StrategyA2_SR1_SKW1.csv', header=None, parse_dates=True, index_col=0)
StrategyA2_SR1_SKW1_returns.columns=['Return']

StrategyA2_SR1_SKW1_returns.plot(title = 'Daily return - Strategy A Instrument 2', figsize=(12, 6))

cum_datalist=[1+x for x in StrategyA2_SR1_SKW1_returns['Return']]
cum_datalist=pd.DataFrame(cum_datalist, index=StrategyA2_SR1_SKW1_returns.index)
cum_datalist.cumprod().plot(title = 'Cumulative Daily return - Strategy A Instrument 2', figsize=(12, 6))


### Strategy A - Trend following system - Instrument 3

Here we have simulated the return of a trend following strategy and forced it to have a Sharpe Ratio of 1 and skewness of 1.

Although it has the same characteristics as the previous one, the volatility is different and allows us to evaluate its contribution within the portfolio.

In [ ]:

StrategyA3_SR1_SKW1_returns = pd.read_csv('StrategyA3_SR1_SKW1.csv', header=None, parse_dates=True, index_col=0)
StrategyA3_SR1_SKW1_returns.columns=['Return']

StrategyA3_SR1_SKW1_returns.plot(title = 'Daily return - Strategy A Instrument 3', figsize=(12, 6))

cum_datalist=[1+x for x in StrategyA3_SR1_SKW1_returns['Return']]
cum_datalist=pd.DataFrame(cum_datalist, index=StrategyA3_SR1_SKW1_returns.index)
cum_datalist.cumprod().plot(title = 'Cumulative Daily return - Strategy A Instrument 3', figsize=(12, 6))




### Strategy B - Mean reversion system - Instrument 1

Here we have simulated the return of a mean reversion strategy and forced it to have a Sharpe Ratio of 0.5 and skewness of -1.

In [ ]:

StrategyB1_SR05_SKWn1_returns = pd.read_csv('StrategyB1_SR0.5_SKW-1.csv', header=None, parse_dates=True, index_col=0)
StrategyB1_SR05_SKWn1_returns.columns=['Return']

StrategyB1_SR05_SKWn1_returns.plot(title = 'Daily return - Strategy B Instrument 1', figsize=(12, 6))

cum_datalist=[1+x for x in StrategyB1_SR05_SKWn1_returns['Return']]
cum_datalist=pd.DataFrame(cum_datalist, index=StrategyB1_SR05_SKWn1_returns.index)
cum_datalist.cumprod().plot(title = 'Cumulative Daily return - Strategy B Instrument 1', figsize=(12, 6))



### Strategy B - Mean reversion system - Instrument 2

Here we have simulated the return of a mean reversion strategy and forced it to have a Sharpe Ratio of 1 and skewness of -1.

In [ ]:

StrategyB2_SR1_SKWn1_returns = pd.read_csv('StrategyB2_SR1_SKW-1.csv', header=None, parse_dates=True, index_col=0)
StrategyB2_SR1_SKWn1_returns.columns=['Return']

StrategyB2_SR1_SKWn1_returns.plot(title = 'Daily return - Strategy B Instrument 2', figsize=(12, 6))

cum_datalist=[1+x for x in StrategyB2_SR1_SKWn1_returns['Return']]
cum_datalist=pd.DataFrame(cum_datalist, index=StrategyB2_SR1_SKWn1_returns.index)
cum_datalist.cumprod().plot(title = 'Cumulative Daily return - Strategy B Instrument 2', figsize=(12, 6))


## Portfolio Strategies

### Portfolio with strategies A and B - 5 Instruments

In order to facilitate the analysis, we create a dataframe with all the returns we have.

In [ ]:

Strategies_A_B = pd.concat([StrategyA1_SR05_SKW1_returns, StrategyA2_SR1_SKW1_returns, StrategyA3_SR1_SKW1_returns, StrategyB1_SR05_SKWn1_returns, StrategyB2_SR1_SKWn1_returns], axis=1, ignore_index=False)
Strategies_A_B.columns=['StratA1', 'StratA2', 'StratA3', 'StratB1', 'StratB2']
Strategies_A_B.cumsum().plot(title = 'Cumulative Daily returns', figsize=(12, 6))



In [ ]

Strategies_A_B.plot(title="Strategies Returns", figsize=(12,10),subplots=True)


### Portfolio with the strategy A - 3 Instruments

In order to facilitate the analysis, we create a dataframe with all the strategy A returns.

In [ ]:

Strategy_A = pd.concat([StrategyA1_SR05_SKW1_returns, StrategyA2_SR1_SKW1_returns, StrategyA3_SR1_SKW1_returns], axis=1, ignore_index=False)
Strategy_A.columns=['StratA1', 'StratA2', 'StratA3']
Strategy_A.cumsum().plot(title = 'Strategy A Cumulative Daily returns', figsize=(12, 6))



In [ ]

Strategy_A.plot(title="Strategy A Returns", figsize=(12,10),subplots=True)


### Portfolio with strategy B - 2 Instruments

In order to facilitate the analysis, we create a dataframe with all the strategy B returns.

In [ ]:

Strategy_B = pd.concat([StrategyB1_SR05_SKWn1_returns, StrategyB2_SR1_SKWn1_returns], axis=1, ignore_index=False)
Strategy_B.columns=['StratB1', 'StratB2']
Strategy_B.cumsum().plot(title = 'Strategy B Cumulative Daily returns', figsize=(12, 6))


In[ ]:

Strategy_B.plot(title="Strategy B Returns", figsize=(12,10),subplots=True)


## Basic Analysis

Some basic functions to characterize individually the returns.

In [ ]:

Strategies_A_B.describe()

Out[ ]:

|       | StratA1 | StratA2 | StratA3 | StratB1 | StratB2 |
| --- | --- | --- | --- | --- | --- |
| count | 2500.000000 | 2500.000000 | 2500.000000 | 2500.000000 | 2500.000000 |
| mean | 0.000336 | 0.000518 | 0.000619 | 0.000625 | 0.000613 |
| std | 0.012382 | 0.012449 | 0.012265 | 0.012167 | 0.012487 |
| min | -0.022628 | -0.023483 | -0.022159 | -0.061927 | -0.062419 |
| 25% | -0.008736 | -0.008848 | -0.008323 | -0.006073 | -0.006649 |
| 50% | -0.001367 | -0.001429 | -0.001100 | 0.002460 | 0.002505 |
| 75% | 0.007027 | 0.007482 | 0.007427 | 0.009481 | 0.009931 |
| max | 0.066684 | 0.063726 | 0.064527 | 0.023033 | 0.024209 |

### Return distribution

In [ ]:

Strategies_A_B.kurtosis()

Out[ ]:

StratA1    1.457507
StratA2    0.905275
StratA3    1.603533
StratB1    1.689942
StratB2    1.401474
dtype: float64

In [ ]:

Strategies_A_B.skew()

Out[ ]:

StratA1    0.975777
StratA2    0.871351
StratA3    1.008925
StratB1   -1.017681
StratB2   -0.960106
dtype: float64

Here we can see the distribution of returns for each of the strategies we have in hand.

In [ ]:

Strategies_A_B.plot(kind="hist", bins=50, subplots=True, figsize=(16,10))



As we commented before, correlation tells us when asset or strategy returns move together. To benefit from diversification, the correlation must be low.

We can calculate the correlation between the strategies' returns over the whole series:

In [ ]:

corr = Strategies_A_B.corr()
corr

Out[ ]:

|       | StratA1 | StratA2 | StratA3 | StratB1 | StratB2 |
| --- | --- | --- | --- | --- | --- |
| StratA1 | 1.000000 | 0.002856 | -0.025571 | 0.030596 | -0.002940 |
| StratA2 | 0.002856 | 1.000000 | 0.035310 | -0.025093 | 0.020062 |
| StratA3 | -0.025571 | 0.035310 | 1.000000 | 0.026725 | 0.020929 |
| StratB1 | 0.030596 | -0.025093 | 0.026725 | 1.000000 | -0.011878 |
| StratB2 | -0.002940 | 0.020062 | 0.020929 | -0.011878 | 1.000000 |

Or analyze the correlation over the investment time horizon we are interested in, here the last 60 sessions:

In [ ]:

corr = Strategies_A_B[-60:].corr()
corr

Out[ ]:

|       | StratA1 | StratA2 | StratA3 | StratB1 | StratB2 |
| --- | --- | --- | --- | --- | --- |
| StratA1 | 1.000000 | -0.251070 | -0.068042 | 0.199785 | 0.128537 |
| StratA2 | -0.251070 | 1.000000 | -0.038942 | -0.086718 | -0.033532 |
| StratA3 | -0.068042 | -0.038942 | 1.000000 | -0.091949 | 0.063135 |
| StratB1 | 0.199785 | -0.086718 | -0.091949 | 1.000000 | -0.267339 |
| StratB2 | 0.128537 | -0.033532 | 0.063135 | -0.267339 | 1.000000 |

The above analyses give us a snapshot at a given time for a given horizon, but the reality is that the correlation varies over time and knowing this allows us to make better estimates.

For example, we can see that the correlation is different if we consider a year:

In [ ]:

Strategies_A_B['StratA1'].rolling(252).corr(Strategies_A_B['StratA2']).plot()


A quarter's correlation is greater than the annual correlation.

In [ ]:

Strategies_A_B['StratA1'].rolling(60).corr(Strategies_A_B['StratA2']).plot()


If we reduce the horizon to two weeks, we see that the correlation is quite high. Therefore, we see that the correlation is closely related to the analysis horizon.

In [ ]:

Strategies_A_B['StratA1'].rolling(10).corr(Strategies_A_B['StratA2']).plot()


## Basic performance analysis

Although all performance indicators can of course be calculated by hand, it is worth knowing that the pyfolio library offers an immense amount of information about the performance of our strategy.

In [ ]:

pf.tears.create_returns_tear_sheet(pd.Series(Strategies_A_B['StratA1']))

Out[ ]:

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 6.8% |
| Cumulative returns | 91.5% |
| Annual volatility | 19.7% |
| Sharpe ratio | 0.43 |
| Calmar ratio | 0.15 |
| Stability | 0.68 |
| Max drawdown | -46.0% |
| Omega ratio | 1.07 |
| Sortino ratio | 0.72 |
| Skew | 0.98 |
| Kurtosis | 1.45 |
| Tail ratio | 1.44 |
| Daily value at risk | -2.4% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 45.99 | 2000-02-25 | 2001-12-17 | 2005-01-21 | 1281 |
| 1 | 31.85 | 2007-11-08 | 2008-11-07 | NaT | NaN |
| 2 | 21.48 | 2005-05-20 | 2005-08-04 | 2005-10-21 | 111 |
| 3 | 14.81 | 2006-08-04 | 2007-01-02 | 2007-03-19 | 162 |
| 4 | 13.67 | 2006-01-09 | 2006-06-19 | 2006-07-17 | 136 |

To understand the information in the report, you can read more in this post: Performance & risk metrics optimization

## Equal weighted portfolio

As you can imagine, the problem we are dealing with is how to distribute the available capital among the portfolio strategies that have passed the mandatory robust backtesting.

To know whether we are doing well, we need something to compare against: the benchmark. It must have characteristics similar to the portfolio we want to evaluate.

For example, the trend following system should be compared with a strategy of buying and holding a portfolio with the same assets.

Here we are going to create a portfolio whose weights are identical for each of the instruments, without differentiating by type of strategy. It serves as a baseline against which to compare the weight allocations we will be testing.

In [ ]:

portfolio_total_return = np.sum([0.2, 0.2, 0.2, 0.2, 0.2] * Strategies_A_B, axis=1)

Once the total return of the equally distributed portfolio has been computed, we generate the performance report.

In [ ]:

pf.tears.create_returns_tear_sheet(pd.Series(portfolio_total_return))

Out[ ]:

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 14.2% |
| Cumulative returns | 272.8% |
| Annual volatility | 8.9% |
| Sharpe ratio | 1.54 |
| Calmar ratio | 1.34 |
| Stability | 0.97 |
| Max drawdown | -10.6% |
| Omega ratio | 1.28 |
| Sortino ratio | 2.39 |
| Skew | 0.10 |
| Kurtosis | 0.06 |
| Tail ratio | 1.18 |
| Daily value at risk | -1.1% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 10.56 | 2002-09-23 | 2003-02-19 | 2003-04-17 | 149 |
| 1 | 9.39 | 2006-01-18 | 2006-03-07 | 2006-04-05 | 56 |
| 2 | 9.13 | 2001-03-26 | 2001-08-21 | 2001-12-20 | 194 |
| 3 | 8.79 | 2007-07-25 | 2008-04-24 | 2008-12-25 | 372 |
| 4 | 7.41 | 2002-01-23 | 2002-04-19 | 2002-06-06 | 97 |

## Portfolio weights optimized with Kelly criterion

The Kelly criterion is one of the methods available to estimate the weights of our portfolio; it aims to maximize the expected logarithmic growth of capital for the analyzed portfolio.
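The objective being maximized here can be written compactly. With daily return vectors $r_t$ and weight vector $w$, the sample-based Kelly problem solved below is:

```latex
\max_{w}\;\sum_{t=1}^{T}\log\!\left(1 + w^{\top} r_{t}\right)
\qquad \text{s.t.}\quad w \ge 0,\;\; \mathbf{1}^{\top} w = 1 .
```

This is a concave maximization over a convex set, which is why cvxpy can solve it directly.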

First, we get the number of strategies (the columns) in the portfolio.

In [ ]:
no_of_stocks = Strategies_A_B.shape[1]
no_of_stocks

Out[ ]:
5

Next, we define the cvxpy variable that will hold the weights.

In [ ]:
weights = cp.Variable(no_of_stocks)
weights.shape

Out[ ]:
(5,)

The portfolio returns are based on the daily return multiplied by the weight for each asset.

In [ ]:
portfolio_returns = (np.array(Strategies_A_B)*weights)
portfolio_returns

Out[ ]:
Expression(AFFINE, UNKNOWN, (2500,))

### Kelly Criterion

The final portfolio value or the utility of the portfolio can be computed using the logarithmic summation of the daily portfolio returns.

In [ ]:
final_portfolio_value = cp.sum(cp.log(1+portfolio_returns))
final_portfolio_value

Out[ ]:
Expression(CONCAVE, UNKNOWN, ())

The output tells us that final_portfolio_value is an expression, concave in nature, whose value is unknown until the problem is solved.

The objective of this example is to maximise the Kelly criterion. To do this, create a variable called objective and assign the maximisation condition to it, using the Maximize function of the cvxpy library.

In [ ]:
objective = cp.Maximize(final_portfolio_value)
objective

Out[ ]:
Maximize(Expression(CONCAVE, UNKNOWN, ()))

The output tells us that the objective is a Maximize function over the expression, which is concave in nature and whose value is unknown.

Before you solve the objective, you need to remember that there are certain constraints on the weights of the portfolio.

1. The weights should be positive, as we are only deciding whether or not to use each strategy (no shorting a strategy).
2. The weights should sum to 1, as we invest all the capital and do not use leverage.

In [ ]:
constraints = [0.0<=weights, cp.sum(weights)==1]
constraints

Out[ ]:
[Inequality(Constant(CONSTANT, ZERO, ())),
Equality(Expression(AFFINE, UNKNOWN, ()), Constant(CONSTANT, NONNEGATIVE, ()))]

Here the first constraint is an inequality against the constant zero, while the second is an equality between an affine expression (the sum of the weights) and a non-negative constant.

Now, you can combine both the objective and the constraints to create a problem statement. You can do this by using the Problem class of the cvxpy library, as shown below.

In [ ]:
problem = cp.Problem(objective, constraints)
problem

Out[ ]:
Problem(Maximize(Expression(CONCAVE, UNKNOWN, ())), [Inequality(Constant(CONSTANT, ZERO, ())), Equality(Expression(AFFINE, UNKNOWN, ()), Constant(CONSTANT, NONNEGATIVE, ()))])

Here the output describes the entire problem, combining all the previously described expressions into a single statement.

You can use the solve method of the problem class to get the best weight combination as shown below:

In [ ]:
# The optimal objective value is returned by prob.solve().
problem.solve()

# The optimal value for w is stored in w.value.
print(weights.value)


Out[ ]

[2.86865963e-12 2.26342494e-11 3.30438909e-01 3.81809412e-01
2.87751679e-01]


These are the optimal weights according to the Kelly criterion. As we can see, the first two strategies have such small weights that we can discard them; simplifying, we can say that the remaining three have similar weights.

In [ ]:
portfolio_total_return_kelly = np.sum(weights.value * Strategies_A_B, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_kelly)

Out[ ]:

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 16.1% |
| Cumulative returns | 340.7% |
| Annual volatility | 11.5% |
| Sharpe ratio | 1.36 |
| Calmar ratio | 0.96 |
| Stability | 0.96 |
| Max drawdown | -16.8% |
| Omega ratio | 1.24 |
| Sortino ratio | 2.00 |
| Skew | -0.20 |
| Kurtosis | 0.53 |
| Tail ratio | 1.05 |
| Daily value at risk | -1.4% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 16.83 | 2005-11-07 | 2006-03-07 | 2006-05-30 | 147 |
| 1 | 13.79 | 2002-08-13 | 2003-02-19 | 2003-05-23 | 204 |
| 2 | 12.97 | 2000-01-28 | 2000-06-09 | 2001-03-05 | 287 |
| 3 | 10.05 | 2002-01-23 | 2002-04-19 | 2002-06-17 | 104 |
| 4 | 9.59 | 2003-09-02 | 2003-10-16 | 2004-05-06 | 178 |

If we compare it with the initial portfolio with the equally distributed weights, we see that Kelly's optimization has improved the annualized return and the cumulative one, but in contrast, we have decreased the SR and increased the volatility.

### Kelly Strategy A

We are going to repeat the same exercise for each of the two strategies individually; in this way, we can find the weights we should assign to each of the assets within a single strategy.

In [ ]:

no_of_stocks = Strategy_A.shape[1]
no_of_stocks
weights = cp.Variable(no_of_stocks)
weights.shape
# Save the portfolio returns in a variable
portfolio_returns = (np.array(Strategy_A)*weights)
portfolio_returns
final_portfolio_value = cp.sum(cp.log(1+portfolio_returns))
final_portfolio_value
objective = cp.Maximize(final_portfolio_value)
objective
constraints = [0.0<=weights, cp.sum(weights)==1]
constraints
problem = cp.Problem(objective, constraints)
problem
# The optimal objective value is returned by prob.solve().
problem.solve()

# The optimal value for w is stored in w.value.
print(weights.value)

kelly_portfolio_returnsA = ((Strategy_A)*(weights.value)).sum(axis=1)
kelly_portfolio_value = (1+(kelly_portfolio_returnsA)).cumprod()
kelly_annualized_returnsA = (
(kelly_portfolio_value[-1])**(252/len(Strategy_A)))-1

# Print the annualized returns of the Kelly portfolio
kelly_annualized_returnsA

portfolio_total_return_kellyA = np.sum(weights.value * Strategy_A, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_kellyA)

# Note: these weights sum to 0.6, so 40% of the capital stays uninvested
portfolio_total_return_equal = np.sum([0.2, 0.2, 0.2] * Strategy_A, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_equal)

Out[ ]

[9.01444775e-12 1.44402270e-01 8.55597730e-01]
Kelly-weighted portfolio (Strategy A):

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 14.8% |
| Cumulative returns | 293.1% |
| Annual volatility | 17.0% |
| Sharpe ratio | 0.90 |
| Calmar ratio | 0.58 |
| Stability | 0.94 |
| Max drawdown | -25.4% |
| Omega ratio | 1.16 |
| Sortino ratio | 1.55 |
| Skew | 0.98 |
| Kurtosis | 1.54 |
| Tail ratio | 1.50 |
| Daily value at risk | -2.1% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 25.36 | 2002-08-13 | 2003-01-17 | 2003-08-26 | 271 |
| 1 | 17.16 | 2001-12-12 | 2002-04-22 | 2002-08-01 | 167 |
| 2 | 16.56 | 2000-03-31 | 2000-08-29 | 2001-06-07 | 310 |
| 3 | 15.45 | 2005-11-07 | 2006-03-06 | 2006-05-08 | 131 |
| 4 | 13.20 | 2004-05-13 | 2004-07-16 | 2004-10-19 | 114 |

Equal-weighted portfolio (Strategy A):

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 7.5% |
| Cumulative returns | 104.1% |
| Annual volatility | 6.8% |
| Sharpe ratio | 1.09 |
| Calmar ratio | 0.57 |
| Stability | 0.93 |
| Max drawdown | -13.1% |
| Omega ratio | 1.19 |
| Sortino ratio | 1.78 |
| Skew | 0.58 |
| Kurtosis | 0.51 |
| Tail ratio | 1.30 |
| Daily value at risk | -0.8% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 13.09 | 2007-09-13 | 2008-11-11 | NaT | NaN |
| 1 | 7.71 | 2002-09-23 | 2003-02-19 | 2003-04-17 | 149 |
| 2 | 7.48 | 2001-03-28 | 2001-08-21 | 2002-08-13 | 360 |
| 3 | 7.36 | 2005-05-23 | 2005-08-04 | 2005-11-04 | 120 |
| 4 | 4.72 | 2003-05-20 | 2003-08-05 | 2003-09-30 | 96 |

Again we observe the same as with the complete portfolio, with Kelly's optimization we increase returns, but in contrast, we have decreased the SR and increased the volatility.

### Kelly Strategy B

Now the same for the assets in strategy B:

In [ ]:

no_of_stocks = Strategy_B.shape[1]
no_of_stocks
weights = cp.Variable(no_of_stocks)
weights.shape
# Save the portfolio returns in a variable
portfolio_returns = (np.array(Strategy_B)*weights)
portfolio_returns
final_portfolio_value = cp.sum(cp.log(1+portfolio_returns))
final_portfolio_value
objective = cp.Maximize(final_portfolio_value)
objective
constraints = [0.0<=weights, cp.sum(weights)==1]
constraints
problem = cp.Problem(objective, constraints)
problem
# The optimal objective value is returned by prob.solve().
problem.solve()

# The optimal value for w is stored in w.value.
print(weights.value)

kelly_portfolio_returnsB = ((Strategy_B)*(weights.value)).sum(axis=1)
kelly_portfolio_value = (1+(kelly_portfolio_returnsB)).cumprod()
kelly_annualized_returnsB = (
(kelly_portfolio_value[-1])**(252/len(Strategy_B)))-1

# Print the annualized returns of the Kelly portfolio
kelly_annualized_returnsB

portfolio_total_return_kellyB = np.sum(weights.value * Strategy_B, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_kellyB)

# Note: these weights sum to 0.4, so 60% of the capital stays uninvested
portfolio_total_return_equal = np.sum([0.2, 0.2] * Strategy_B, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_equal)

Out[ ]:

[0.55188147 0.44811853]
Kelly-weighted portfolio (Strategy B):

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 15.8% |
| Cumulative returns | 327.8% |
| Annual volatility | 13.8% |
| Sharpe ratio | 1.13 |
| Calmar ratio | 0.87 |
| Stability | 0.96 |
| Max drawdown | -18.2% |
| Omega ratio | 1.20 |
| Sortino ratio | 1.55 |
| Skew | -0.68 |
| Kurtosis | 0.68 |
| Tail ratio | 0.84 |
| Daily value at risk | -1.7% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 18.17 | 2005-12-27 | 2006-03-07 | 2006-06-09 | 119 |
| 1 | 16.13 | 2002-12-18 | 2003-10-14 | 2004-08-03 | 425 |
| 2 | 14.76 | 2000-01-28 | 2000-06-09 | 2000-11-16 | 210 |
| 3 | 11.07 | 2002-01-29 | 2002-04-01 | 2002-05-15 | 77 |
| 4 | 10.15 | 2006-11-08 | 2006-12-13 | 2007-01-25 | 57 |

Equal-weighted portfolio (Strategy B):

| Metric | Value |
| --- | --- |
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 6.3% |
| Cumulative returns | 82.9% |
| Annual volatility | 5.5% |
| Sharpe ratio | 1.13 |
| Calmar ratio | 0.84 |
| Stability | 0.96 |
| Max drawdown | -7.5% |
| Omega ratio | 1.20 |
| Sortino ratio | 1.55 |
| Skew | -0.67 |
| Kurtosis | 0.61 |
| Tail ratio | 0.85 |
| Daily value at risk | -0.7% |

Worst drawdown periods:

|   | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
| --- | --- | --- | --- | --- | --- |
| 0 | 7.46 | 2005-12-27 | 2006-03-07 | 2006-05-31 | 112 |
| 1 | 6.74 | 2002-12-18 | 2003-10-14 | 2004-05-27 | 377 |
| 2 | 6.21 | 2000-01-28 | 2000-06-09 | 2000-11-16 | 210 |
| 3 | 4.56 | 2002-01-29 | 2002-04-01 | 2002-05-15 | 77 |
| 4 | 4.39 | 2006-11-08 | 2006-12-13 | 2007-01-25 | 57 |

Again we observe the same behaviour as with the complete portfolio: Kelly's optimization increases returns but, in exchange, lowers the Sharpe ratio and increases volatility.
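As a reminder of how those tear-sheet figures are produced, the Sharpe ratio is annualized from daily returns roughly as follows. This is a minimal sketch; pyfolio's exact implementation may differ in details such as the degrees of freedom:

```python
import numpy as np

def annualized_sharpe(daily_returns, daily_risk_free=0.0):
    """Annualized Sharpe ratio, assuming 252 trading days per year."""
    excess = np.asarray(daily_returns) - daily_risk_free
    return np.sqrt(252) * excess.mean() / excess.std(ddof=1)

# Hypothetical daily returns
rng = np.random.default_rng(0)
sample = rng.normal(0.0006, 0.01, size=2000)
print(annualized_sharpe(sample))
```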

### Kelly Strategy A & B

Now that we have optimized the weights within each strategy independently, we are going to optimize the weights assigned to each strategy as a whole.
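For reference, the optimization solved with cvxpy throughout this post is the long-only, fully invested Kelly problem of maximizing the log of final wealth:

```latex
\max_{w} \; \sum_{t} \log\!\left(1 + w^{\top} r_{t}\right)
\quad \text{subject to} \quad w_{i} \ge 0, \quad \sum_{i} w_{i} = 1
```

where \(r_t\) is the vector of returns of the portfolio components on day \(t\).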

In [ ]:

kelly_portfolio_returnsA_ = kelly_portfolio_returnsA.to_frame('StratA')
kelly_portfolio_returnsB_ = kelly_portfolio_returnsB.to_frame('StratB')
Kelly_Strat_A_B = pd.concat([kelly_portfolio_returnsA_, kelly_portfolio_returnsB_],
                            axis=1, ignore_index=False)

no_of_stocks = Kelly_Strat_A_B.shape[1]
no_of_stocks
weights = cp.Variable(no_of_stocks)
weights.shape
# Multiply the returns of the two Kelly portfolios by the cvxpy weight variable
# and save the portfolio returns in a variable
portfolio_returns = (np.array(Kelly_Strat_A_B)*weights)
portfolio_returns
final_portfolio_value = cp.sum(cp.log(1+portfolio_returns))
final_portfolio_value
objective = cp.Maximize(final_portfolio_value)
objective
constraints = [0.0<=weights, cp.sum(weights)==1]
constraints
problem = cp.Problem(objective, constraints)
problem
# The optimal objective value is returned by problem.solve().
problem.solve()

# The optimal weights are stored in weights.value.
print(weights.value)

kelly_portfolio_returnsAB = ((Kelly_Strat_A_B)*(weights.value)).sum(axis=1)
kelly_portfolio_value = (1+(kelly_portfolio_returnsAB)).cumprod()
kelly_annualized_returnsAB = (
    kelly_portfolio_value[-1] ** (252 / len(Kelly_Strat_A_B))) - 1

# Print the annualized returns of the combined Kelly A & B portfolio
kelly_annualized_returnsAB

portfolio_total_return_kellyAB = np.sum(weights.value * Kelly_Strat_A_B, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_kellyAB)

# Equal-weight benchmark (note: [0.2, 0.2] sums to 0.4, so only 40% of the
# capital is deployed; fully invested equal weights would be [0.5, 0.5])
portfolio_total_return_equal = np.sum([0.2, 0.2] * Kelly_Strat_A_B, axis=1)
pf.tears.create_returns_tear_sheet(portfolio_total_return_equal)

Out[ ]:

[0.31516003 0.68483997]
Performance statistics (Kelly weights):

| Metric | Value |
|---|---|
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 16.0% |
| Cumulative returns | 337.7% |
| Annual volatility | 11.0% |
| Sharpe ratio | 1.41 |
| Calmar ratio | 1.03 |
| Stability | 0.97 |
| Max drawdown | -15.6% |
| Omega ratio | 1.25 |
| Sortino ratio | 2.04 |
| Skew | -0.32 |
| Kurtosis | 0.49 |
| Tail ratio | 1.00 |
| Daily value at risk | -1.3% |

Worst drawdown periods:

|  | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
|---|---|---|---|---|---|
| 0 | 15.64 | 2005-11-07 | 2006-03-07 | 2006-05-24 | 143 |
| 1 | 12.36 | 2000-01-28 | 2000-06-09 | 2000-12-25 | 237 |
| 2 | 12.34 | 2002-08-13 | 2003-02-19 | 2003-04-21 | 180 |
| 3 | 9.76 | 2002-01-23 | 2002-04-01 | 2002-06-18 | 105 |
| 4 | 8.76 | 2003-09-02 | 2003-10-16 | 2004-03-05 | 134 |

Performance statistics (equal weights):

| Metric | Value |
|---|---|
| Start date | 2000-01-03 |
| End date | 2009-07-31 |
| Total months | 119 |
| Annual return | 6.3% |
| Cumulative returns | 82.6% |
| Annual volatility | 4.4% |
| Sharpe ratio | 1.39 |
| Calmar ratio | 0.96 |
| Stability | 0.97 |
| Max drawdown | -6.5% |
| Omega ratio | 1.25 |
| Sortino ratio | 2.19 |
| Skew | 0.29 |
| Kurtosis | 0.66 |
| Tail ratio | 1.20 |
| Daily value at risk | -0.5% |

Worst drawdown periods:

|  | Net drawdown in % | Peak date | Valley date | Recovery date | Duration |
|---|---|---|---|---|---|
| 0 | 6.55 | 2002-08-13 | 2003-02-19 | 2003-05-19 | 200 |
| 1 | 6.43 | 2005-11-07 | 2006-03-07 | 2006-05-11 | 134 |
| 2 | 4.57 | 2002-01-23 | 2002-04-19 | 2002-06-26 | 111 |
| 3 | 4.49 | 2000-01-28 | 2000-06-09 | 2001-03-01 | 285 |
| 4 | 3.57 | 2007-09-07 | 2007-12-17 | 2008-02-28 | 125 |

So what we now have are two strategies, each with several instruments, and we have optimized the instrument weights within each strategy independently.

We then re-applied the same optimization to the returns of the two optimized strategy portfolios, which gives us the final weights of the combined portfolio.

We can summarize the capital allocation as follows:

• Capital is first divided between the strategies according to the last optimization.
• The capital assigned to each strategy is then divided among its instruments by their optimal weights.
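The two layers of weights combine by simple multiplication: the strategy-level weight times the instrument-level weight gives each instrument's share of total capital. Using the Strategy B weights printed above (Strategy A works the same way):

```python
import numpy as np

# Weights from the optimizations above: Strategy A vs Strategy B,
# and the instruments inside Strategy B
strategy_weights = np.array([0.31516003, 0.68483997])
instruments_B = np.array([0.55188147, 0.44811853])

# Each Strategy B instrument's share of total capital
final_B = strategy_weights[1] * instruments_B
print(final_B)
print(final_B.sum())   # equals Strategy B's overall weight
```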

## The Efficient Frontier: Markowitz Portfolio Optimization

We can repeat the same exercise using the efficient frontier proposed by Markowitz. His method maximizes expected return for a given level of risk using the means, standard deviations and correlations of the strategies' returns; the tangency portfolio on the frontier is the one that maximizes the Sharpe ratio.

However, because the method optimizes directly on the sample means and covariances, the weights it produces tend to be more extreme than Kelly's.
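In the quadratic-programming form used by the cvxopt code below, each point on the frontier solves, for a risk-aversion parameter \(\mu\) swept over several orders of magnitude:

```latex
\min_{w} \;\; \mu \, w^{\top} \Sigma \, w \;-\; \bar{p}^{\top} w
\quad \text{subject to} \quad w \ge 0, \quad \mathbf{1}^{\top} w = 1
```

where \(\Sigma\) is the annualized covariance matrix of strategy returns and \(\bar{p}\) the vector of annualized mean returns.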

Read the following post and webinar for a complete explanation Multi-Strategy Portfolios: Combining Quantitative Strategies Effectively

In [ ]:

# from https://plotly.com/python/v3/ipython-notebooks/markowitz-portfolio-optimization/
def rand_weights(n):
    ''' Produces n random weights that sum to 1 '''
    k = np.random.rand(n)
    return k / sum(k)

def random_portfolio(returns):
    '''
    Returns the annualized mean and standard deviation of returns
    for a random portfolio
    '''
    p = np.asmatrix(np.mean(returns, axis=1)) * 252
    w = np.asmatrix(rand_weights(returns.shape[0]))
    C = np.asmatrix(np.cov(returns)) * 252

    mu = w * p.T
    sigma = np.sqrt(w * C * w.T)

    # This recursion discards extreme outliers to keep the plots readable
    if sigma > 2*252:
        return random_portfolio(returns)
    return mu, sigma
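NumPy has long discouraged the matrix class used above; as a self-contained sketch, the same computation with plain arrays on two hypothetical return series:

```python
import numpy as np

def rand_weights(n):
    # n random weights that sum to 1
    k = np.random.rand(n)
    return k / k.sum()

# Two hypothetical daily return series (rows = strategies, columns = days)
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(2, 252))

w = rand_weights(2)
mu = 252 * w @ returns.mean(axis=1)                # annualized portfolio mean
sigma = np.sqrt(w @ (252 * np.cov(returns)) @ w)   # annualized portfolio volatility
print(mu, sigma)
```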

In [ ]:

return_vec = Strategies_A_B.dropna().values.T
return_vec

In [ ]:

n_portfolios = 10000
means, stds = np.column_stack([
    random_portfolio(return_vec)
    for _ in range(n_portfolios)
])

In [ ]:

plt.figure(figsize=(16, 10))
plt.plot(stds, means, 'o', markersize=5)
plt.xlabel('std')
plt.ylabel('mean')
plt.title('Mean and standard deviation of returns of random Strategy generated portfolios')

In [ ]:

# Turn off progress printing
solvers.options['show_progress'] = False

def optimal_portfolio(returns):
    n = len(returns)
    returns = np.asmatrix(returns)

    # Sweep the risk-aversion parameter over several orders of magnitude
    N = 10000
    mus = [10**(5.0 * t/N - 1.0) for t in range(N)]

    # Convert to cvxopt matrices (annualized mean and covariance)
    S = opt.matrix(np.asmatrix(np.cov(returns)*252))
    pbar = opt.matrix(np.asmatrix(np.mean(returns, axis=1)*252))

    # Create constraint matrices
    G = -opt.matrix(np.eye(n))   # negative n x n identity matrix (weights >= 0)
    h = opt.matrix(0.0, (n, 1))
    A = opt.matrix(1.0, (1, n))  # weights sum to 1
    b = opt.matrix(1.0)

    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
                  for mu in mus]
    # Calculate risks and returns of the frontier portfolios
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
    # Fit a 2nd-degree polynomial to the frontier curve
    m1 = np.polyfit(returns, risks, 2)
    x1 = np.sqrt(m1[2] / m1[0])
    # Calculate the optimal portfolio at that point
    wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
    return np.asarray(wt), returns, risks

weights, returns, risks = optimal_portfolio(return_vec)

plt.figure(figsize=(16, 10))
plt.plot(stds, means, 'o')
plt.ylabel('mean')
plt.xlabel('std')
plt.plot(risks, returns, 'y-o')

In [ ]:

import scipy.interpolate as sci
import scipy.optimize as sciopt

def getListOfUniqueWithinPrecision(sortedArray):
    ''' Indices of elements that exceed the previous kept value by more than a small tolerance '''
    currentVal = 0
    diffToIgnore = 1e-8
    listOfIndices = []
    for i in range(sortedArray.size):
        if sortedArray[i] - diffToIgnore > currentVal:
            listOfIndices.append(i)
            currentVal = sortedArray[i]
    return listOfIndices

In [ ]:

# Sort the (return, risk) pairs by return, then drop entries whose risk does
# not increase beyond a small tolerance, so the spline fit gets strictly
# increasing x values
twoRowsArrayForSorting = np.vstack([returns, risks]).T
rowsAfterSorting = twoRowsArrayForSorting[twoRowsArrayForSorting[:, 0].argsort()].T
returnsSorted = rowsAfterSorting[0, :]
risksSorted = rowsAfterSorting[1, :]
listOfInd = getListOfUniqueWithinPrecision(risksSorted)
risksSorted = risksSorted[listOfInd]
returnsSorted = returnsSorted[listOfInd]

# Keep only the upper (efficient) branch of the frontier and fit a spline
ind = np.argmin(risksSorted)
evols = risksSorted[ind:]
erets = returnsSorted[ind:]
tck = sci.splrep(evols, erets)
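The deduplication matters because scipy's `splrep` requires strictly increasing x values. A minimal, hypothetical example of the spline fit and evaluation:

```python
import numpy as np
import scipy.interpolate as sci

# Hypothetical (volatility, return) points along an efficient frontier
evols = np.array([0.05, 0.08, 0.12, 0.20, 0.30])
erets = np.array([0.04, 0.07, 0.09, 0.105, 0.11])

tck = sci.splrep(evols, erets)       # cubic spline, interpolating by default
print(sci.splev(0.10, tck))          # interpolated return at 10% volatility
print(sci.splev(0.10, tck, der=1))   # slope (first derivative) at that point
```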

In [ ]:

def f(x):
    ''' Efficient frontier function (splines approximation). '''
    return sci.splev(x, tck, der=0)

def df(x):
    ''' First derivative of efficient frontier function. '''
    return sci.splev(x, tck, der=1)

def equations(p, rf=0.0091):
    # p = [intercept, slope, tangency volatility] of the capital market line
    eq1 = rf - p[0]                   # the line starts at the risk-free rate
    eq2 = rf + p[1] * p[2] - f(p[2])  # the line touches the frontier at p[2]
    eq3 = p[1] - df(p[2])             # the line is tangent to the frontier there
    return eq1, eq2, eq3

# Note: this reuses the name `opt`, shadowing the cvxopt alias used earlier
opt = sciopt.fsolve(equations, [0.0091, 0.5, 0.05])

opt
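The three residuals returned by `equations` encode the conditions that define the capital market line \(y = a + b\,x\) with risk-free rate \(r_f\): it starts at the risk-free rate, touches the frontier \(f\) at the tangency volatility \(x^{*}\), and is tangent to it there:

```latex
a = r_f, \qquad r_f + b\,x^{*} = f(x^{*}), \qquad b = f'(x^{*})
```

`fsolve` returns the solution \((a, b, x^{*})\).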

In [ ]:

np.round(equations(opt), 5)

In [ ]:

plt.figure(figsize=(16, 10))
# random portfolio compositions
plt.plot(stds, means, 'o')
# efficient frontier
plt.plot(evols, erets, 'y', lw=2.0)
# capital market line
cx = np.linspace(0.0, 0.3)
plt.plot(cx, opt[0] + opt[1] * cx, lw=1.0)
# tangency portfolio
plt.plot(opt[2], f(opt[2]), 'r*', markersize=11.0)
plt.grid(True)
plt.axhline(0, color='k', ls='--', lw=2.0)
plt.axvline(0, color='k', ls='--', lw=2.0)
plt.xlabel('Expected Volatility')
plt.ylabel('Expected Return')
plt.title("Portfolio Efficient Frontier with Capital Market Line, RF = 0.91%")


In [ ]:

weights = pd.DataFrame(weights, index=Strategies_A_B.columns)*100
weights.columns=["Percent"]
round(weights, 2)

In [ ]:

# Daily return/volatility ratio per strategy (multiply by sqrt(252) to annualize)
Strategies_A_B.mean()/Strategies_A_B.std()

In [ ]:

Strategies_A_B=Strategies_A_B.dropna()

In [ ]:

portfolio_total_return = (0.2 * Strategies_A_B['StratA1']
                          + 0.2 * Strategies_A_B['StratA2']
                          + 0.2 * Strategies_A_B['StratA3']
                          + 0.2 * Strategies_A_B['StratB1']
                          + 0.2 * Strategies_A_B['StratB2'])

In [ ]:

pf.tears.create_returns_tear_sheet(portfolio_total_return)

In [ ]:

portfolio_total_return2 = np.sum([0.2, 0.2, 0.2, 0.2, 0.2] * Strategies_A_B, axis=1)

In [ ]:

pf.tears.create_returns_tear_sheet(portfolio_total_return2)

In [ ]:

# Kelly-optimal weights for the five instruments (the first two are effectively zero)
portfolio_total_return_kelly = np.sum([2.86865963e-12, 2.26342494e-11, 3.30438909e-01, 3.81809412e-01, 2.87751679e-01] * Strategies_A_B, axis=1)

In [ ]:

pf.tears.create_returns_tear_sheet(portfolio_total_return_kelly)

In [ ]:

# Markowitz tangency-portfolio weights (rounded)
portfolio_total_return_markowitz = np.sum([0, 0, 0.336, 0.616, 0.047] * Strategies_A_B, axis=1)

In [ ]:

pf.tears.create_returns_tear_sheet(portfolio_total_return_markowitz)

## Conclusion

The optimization of strategy portfolios is not far from the optimization we would apply to a portfolio of instruments, since in both cases we are working with returns.

Strategies and instruments must be characterized so that we know when they add value to the portfolio and when they do not: although a strategy may offer attractive returns on its own, it may fail to add value to the portfolio as a whole, or may even increase its risk.

Here we have seen two methods that offer different results, essentially because their assumptions differ. Other methods, with yet other results, exist as well; this detail matters depending on the type of portfolio we are trying to optimize.

If you are a trader who faces some of the inevitable day-to-day questions - Where should I invest? How much risk should I take? How can I reduce portfolio volatility? - be sure to check our previous webinar on Quantitative Portfolio Management Strategies.

We hope this blog has been helpful to you. You could also check out all our blogs on Portfolio Management here. Please feel free to share your comments below.

Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.