# Equity versus fixed income: the predictive power of bank surveys


This notebook serves as an illustration of the points discussed in the post “Equity versus fixed income: the predictive power of bank surveys” available on the Macrosynergy website.

Bank lending surveys help predict the relative performance of equity and duration positions. Signals of strengthening credit demand and easing lending conditions favor a stronger economy and expanding leverage, supporting equity returns. Signs of deteriorating credit demand and tightening credit supply point to a weaker economy and more accommodative monetary policy, supporting duration returns. Empirical evidence for developed markets strongly supports these propositions. Since 2000, bank survey scores have been a significant predictor of equity versus duration returns. They helped create uncorrelated returns in both asset classes, as well as for a relative asset class book.

This notebook provides the essential code required to replicate the analysis discussed in the post.

The notebook is organized into three main parts:

Get Packages and JPMaQS Data: This section is responsible for installing and importing the necessary Python packages that are used throughout the analysis.

Transformations and Checks: In this part, the notebook performs various calculations and transformations on the data to derive the relevant signals and targets used for the analysis, including constructing weighted average credit demand, average developed markets equity and duration returns, and relative equity vs. duration returns.

Value Checks: This is the most critical section, where the notebook calculates and implements the trading strategies based on the hypotheses tested in the post. Depending on the analysis, this section involves backtesting various trading strategies targeting equity, fixed income, and relative returns. The strategies utilize the bank survey scores and other signals derived in the previous section.

It’s important to note that while the notebook covers a selection of indicators and strategies used for the post’s main findings, there are countless other possible indicators and approaches that can be explored by users. Users can modify the code to test different hypotheses and strategies based on their own research and ideas. Best of luck with your research!

## Get packages and JPMaQS data

This notebook primarily relies on the standard packages available in the Python data science stack. However, there is an additional package, `macrosynergy`, that is required for two purposes:

- Downloading JPMaQS data: the `macrosynergy` package facilitates the retrieval of JPMaQS data, which is used in the notebook.
- Analysis of quantamental data and value propositions: the `macrosynergy` package provides functionality for performing quick analyses of quantamental data and exploring value propositions.

For detailed information and a comprehensive understanding of the `macrosynergy` package and its functionalities, please refer to the “Introduction to Macrosynergy package” notebook on the Macrosynergy Quantamental Academy or visit the following link on Kaggle.

```
# Uncomment below if the latest macrosynergy package is not installed
"""
%%capture
! pip install git+https://github.com/macrosynergy/macrosynergy@develop""";
```

```
import numpy as np
import pandas as pd
from pandas import Timestamp
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
import os
from datetime import date
import macrosynergy.management as msm
import macrosynergy.panel as msp
import macrosynergy.signal as mss
import macrosynergy.pnl as msn
from macrosynergy.download import JPMaQSDownload
warnings.simplefilter("ignore")
```

The JPMaQS indicators we consider are downloaded using the J.P. Morgan Dataquery API interface within the `macrosynergy` package. This is done by specifying ticker strings that combine a cross-section identifier and an indicator category code, together with a metric, in the expression format `DB(JPMAQS,<cross_section>_<category>,<info>)`, where `<info>` is one of the following metrics:

- `value` giving the latest available values for the indicator,
- `eop_lag` referring to days elapsed since the end of the observation period,
- `mop_lag` referring to the number of days elapsed since the mean observation period, and
- `grading` denoting a grade of the observation, giving a metric of real-time information quality.

After instantiating the `JPMaQSDownload` class within the `macrosynergy.download` module, one can use the `download(tickers, start_date, metrics)` method to easily download the necessary data, where `tickers` is an array of ticker strings, `start_date` is the first collection date to be considered, and `metrics` is an array comprising the time series information to be downloaded. For more information see here.

```
# Cross-sections of interest
cids_dm = [
    "EUR",
    "GBP",
    "JPY",
    "CAD",
    "USD",
]
cids = cids_dm
```

```
# Quantamental categories of interest
main = [
# Demand
"BLSDSCORE_NSA",
# Supply
"BLSCSCORE_NSA",
]
econ = [
"USDGDPWGT_SA_3YMA"
] # economic context
mark = [
"DU05YXR_VT10",
"EQXR_VT10",
] # market context
xcats = main + econ + mark
# Extra tickers
xtix = ["USD_GB10YXR_NSA"]
# Resultant tickers
tickers = [cid + "_" + xcat for cid in cids for xcat in xcats] + xtix
print(f"Maximum number of tickers is {len(tickers)}")
```

```
Maximum number of tickers is 26
```

JPMaQS indicators are conveniently grouped into 6 main categories: Economic Trends, Macroeconomic balance sheets, Financial conditions, Shocks and risk measures, Stylized trading factors, and Generic returns. Each indicator has a separate page with notes, description, availability, statistical measures, and timelines for main currencies. The description of each JPMaQS category is available under Macro quantamental academy. For tickers used in this notebook see Bank survey scores, Global production shares, Duration returns, and Equity index future returns.

```
start_date = "2000-01-01"
# end_date = "2023-05-01"

# Retrieve credentials
client_id: str = os.getenv("DQ_OAUTH_CLIENT_ID")
client_secret: str = os.getenv("DQ_OAUTH_SECRET")

with JPMaQSDownload(client_id=client_id, client_secret=client_secret) as dq:
    df = dq.download(
        tickers=tickers,
        start_date=start_date,
        # end_date=end_date,
        suppress_warning=True,
        metrics=["all"],
        report_time_taken=True,
        show_progress=True,
        report_egress=True,
    )
```

```
Downloading data from JPMaQS.
Timestamp UTC: 2023-09-08 16:03:20
Connection successful!
Number of expressions requested: 104
```

```
Requesting data: 100%|██████████| 6/6 [00:01<00:00, 3.30it/s]
Downloading data: 100%|██████████| 6/6 [00:52<00:00, 8.67s/it]
```

```
Time taken to download data: 55.14 seconds.
Time taken to convert to dataframe: 1.43 seconds.
Average upload size: 0.20 KB
Average download size: 16200.64 KB
Average time taken: 18.78 seconds
Longest time taken: 53.79 seconds
Average transfer rate : 6902.27 Kbps
```

```
dfx = df.copy().sort_values(["cid", "xcat", "real_date"])
dfx.info()
```

```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 146111 entries, 8602 to 146110
Data columns (total 7 columns):
 #   Column     Non-Null Count   Dtype
---  ------     --------------   -----
 0   real_date  146111 non-null  datetime64[ns]
 1   cid        146111 non-null  object
 2   xcat       146111 non-null  object
 3   value      146111 non-null  float64
 4   grading    146111 non-null  float64
 5   eop_lag    146111 non-null  float64
 6   mop_lag    146111 non-null  float64
dtypes: datetime64[ns](1), float64(4), object(2)
memory usage: 8.9+ MB
```
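The downloaded dataframe is in long format, one row per cross-section, category, and date. For quick inspection it can be pivoted to one column per ticker. A minimal sketch with synthetic values (the column names mirror the JPMaQS layout above; the data are made up):

```python
import pandas as pd

# Synthetic long-format frame mimicking the JPMaQS layout
df_long = pd.DataFrame(
    {
        "real_date": pd.to_datetime(
            ["2023-01-02", "2023-01-02", "2023-01-03", "2023-01-03"]
        ),
        "cid": ["USD", "EUR", "USD", "EUR"],
        "xcat": ["EQXR_VT10", "EQXR_VT10", "EQXR_VT10", "EQXR_VT10"],
        "value": [0.5, -0.2, 0.1, 0.3],
    }
)

# Rebuild full tickers and pivot to a wide per-ticker view
df_long["ticker"] = df_long["cid"] + "_" + df_long["xcat"]
df_wide = df_long.pivot(index="real_date", columns="ticker", values="value")
```

The same pattern applies to any of the downloaded metrics by filtering the long frame first.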

## Availability

It is important to assess data availability before conducting any analysis. It helps identify potential gaps or limitations in the dataset, which can impact the validity and reliability of the analysis, ensures that a sufficient number of observations is available for each selected category and cross-section, and helps determine the appropriate time periods for the analysis.

```
msm.check_availability(df, xcats=main, cids=cids, missing_recent=True)
```

```
msm.check_availability(df, xcats=econ+mark, cids=cids, missing_recent=True)
```

## Transformations and checks

In this part, we perform simple calculations and transformations on the data to derive the relevant signals and targets used for the analysis.

## Features

In the presented chart, we combine bank lending survey scores, specifically the credit demand z-score labeled as `BLSDSCORE_NSA` and the credit supply z-score denoted as `BLSCSCORE_NSA`, for developed countries (EUR, GBP, JPY, CAD, and USD). This aggregation is accomplished by assigning weights to individual country scores based on their respective proportions of global GDP and industrial production, with these proportions calculated as a three-year moving average. Subsequently, both the combined credit demand z-score and credit supply z-score are grouped under the identifier `GDM`.
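Conceptually, the weighting step is just a weight-normalized average of the country scores on each date. A stylized single-date sketch with hypothetical scores and weights (the actual calculation uses the `USDGDPWGT_SA_3YMA` series inside `msp.linear_composite`):

```python
import pandas as pd

# Hypothetical credit demand z-scores for one date
scores = pd.Series({"EUR": 0.8, "GBP": 0.2, "JPY": -0.5, "CAD": 0.1, "USD": 1.0})

# Hypothetical GDP/industrial-production weights (3-year moving averages)
weights = pd.Series({"EUR": 0.25, "GBP": 0.08, "JPY": 0.12, "CAD": 0.05, "USD": 0.50})

# Normalize weights over the available cross-sections, then take the weighted mean
w = weights / weights.sum()
gdm_score = (scores * w).sum()
```

Because the weights are renormalized over available cross-sections, a country with missing data on a given date simply drops out of that date's average.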

To visualize these combined indicators, we utilize the `view_timelines()` function from the `macrosynergy` package. You can find more information about this function here.

```
cidx = cids_dm
xcatx = ["BLSDSCORE_NSA", "BLSCSCORE_NSA"]

# GDP-weighted developed markets averages of the survey scores
dfa = pd.DataFrame(columns=list(dfx.columns))
for xc in xcatx:
    dfaa = msp.linear_composite(
        df=dfx,
        xcats=xc,
        cids=cidx,
        weights="USDGDPWGT_SA_3YMA",
        new_cid="GDM",
        complete_cids=False,
        complete_xcats=False,
    )
    dfa = msm.update_df(dfa, dfaa)
dfx = msm.update_df(dfx, dfa)

cidx = ["GDM"]
sdate = "2000-01-01"

msp.view_timelines(
    dfx,
    xcats=xcatx,
    cids=cidx,
    ncol=2,
    cumsum=False,
    start=sdate,
    same_y=False,
    size=(12, 8),
    all_xticks=True,
    title="Quantamental bank lending scores, developed markets average, information states",
    title_fontsize=18,
    title_adj=1.01,
    xcat_labels=[
        "Survey score of loan demand",
        "Survey score of loan standards (supply conditions)",
    ],
    label_adj=0.3,
)
```

## Targets

### Equity and duration returns

In this section, we combine the returns of various countries into a basket of developed market returns, with each country contributing equally. We use the predefined list of developed market currencies, `cids_dm`, which, as before for the features, includes EUR, GBP, JPY, CAD, and USD. The respective averages are assigned to a new cross-section, labeled `GDM`, for two key indicators:

vol-targeted equity returns, `EQXR_VT10` (representing the front future of major equity indices, such as the Standard and Poor’s 500 Composite in USD, EURO STOXX 50 in EUR, Nikkei 225 Stock Average in JPY, FTSE 100 in GBP, and the Toronto Stock Exchange 60 Index in CAD), and

vol-targeted duration returns, `DU05YXR_VT10` (reflecting returns on 5-year interest rate swap fixed receiver positions, with a monthly roll assumption).

To visualize these combined indicators, we utilize the `view_timelines()` function from the `macrosynergy` package. You can find more information about this function here.

```
xcatx = ["EQXR_VT10", "DU05YXR_VT10"]
dict_bsks = {
    "GDM": cids_dm,
    "G3": ["EUR", "JPY", "USD"],
}

# Equally weighted basket returns
dfa = pd.DataFrame(columns=list(dfx.columns))
for xc in xcatx:
    for key, value in dict_bsks.items():
        dfaa = msp.linear_composite(
            df=dfx,
            xcats=xc,
            cids=value,
            new_cid=key,
            complete_cids=False,
        )
        dfa = msm.update_df(dfa, dfaa)
dfx = msm.update_df(dfx, dfa)

cidx = ["GDM"]
sdate = "2000-01-01"

msp.view_timelines(
    dfx,
    xcats=xcatx,
    cids=cidx,
    ncol=2,
    cumsum=True,
    start=sdate,
    same_y=False,
    size=(12, 8),
    all_xticks=True,
    title="Vol-targeted equity and duration basket returns, % cumulative, no compounding",
    title_fontsize=18,
    title_adj=1.01,
    xcat_labels=[
        "Equity index future returns, 10% vol target, DM5 basket",
        "5-year IRS receiver returns, 10% vol target, DM5 basket",
    ],
    label_adj=0.3,
)
```

### Equity versus duration returns

In the following cell, we compute relative returns for developed markets. We establish a new metric, `EQvDUXR_VT10`, defined as the straightforward difference between `EQXR_VT10` and `DU05YXR_VT10`. Subsequently, we consolidate the individual country indicators into a unified metric, employing equal weighting. Similar to our previous approach, this newly combined metric is categorized under the cross-sectional identifier `GDM`.

```
cidx = cids_dm

# Relative return: equity versus duration, per cross-section
calcs = [
    "EQvDUXR_VT10 = EQXR_VT10 - DU05YXR_VT10",
]
dfa = msp.panel_calculator(df, calcs=calcs, cids=cidx)
dfx = msm.update_df(dfx, dfa)

# Equally weighted baskets of the relative returns
xcatx = ["EQvDUXR_VT10"]
dict_bsks = {
    "GDM": cids_dm,
    "G3": ["EUR", "JPY", "USD"],
}
dfa = pd.DataFrame(columns=list(dfx.columns))
for xc in xcatx:
    for key, value in dict_bsks.items():
        dfaa = msp.linear_composite(
            df=dfx,
            xcats=xc,
            cids=value,
            new_cid=key,
            complete_cids=False,
        )
        dfa = msm.update_df(dfa, dfaa)
dfx = msm.update_df(dfx, dfa)
```

## Value checks

In this part of the analysis, the notebook calculates naive PnLs (profit-and-loss series) for directional equity, fixed income, and relative strategies using bank lending scores. The PnLs are based on simple trading rules that use the bank lending scores as signals (no regression analysis is involved). The strategies go long (buy) or short (sell) the respective asset positions based purely on the direction of the bank survey signals.

To evaluate the performance of these strategies, the notebook computes various metrics and ratios, including:

Correlation: measures the relationship between the survey-based strategy returns and the actual returns. A positive correlation indicates that the strategy moves in the same direction as the market, while a negative correlation indicates an opposite movement.

Accuracy metrics: these assess how accurately the survey-based strategies predict the direction of market movements. Common accuracy metrics include the accuracy rate, balanced accuracy, precision, etc.

Performance ratios: various ratios, such as the Sharpe ratio, Sortino ratio, maximum drawdowns, etc.

The notebook compares the performance of these simple survey-based strategies with the long-only performance of the respective asset classes.

It’s important to note that the analysis deliberately disregards transaction costs and risk management considerations. This is done to provide a more straightforward comparison of the strategies’ raw performance without the additional complexity introduced by transaction costs and risk management, which can vary based on trading size, institutional rules, and regulations.

The analysis in the post and sample code in the notebook is a proof of concept only, using the simplest design.
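As an illustration of the performance ratios reported below, annualized Sharpe and Sortino ratios can be computed from a daily PnL series in a few lines. This is a hedged sketch using a common convention of 261 trading days per year and a zero benchmark; the exact conventions inside the package's `evaluate_pnls` method may differ:

```python
import numpy as np

def naive_ratios(daily_pnl: np.ndarray, freq: int = 261):
    """Annualized Sharpe and Sortino ratios of a daily PnL series."""
    mean_ann = daily_pnl.mean() * freq
    sharpe = mean_ann / (daily_pnl.std(ddof=1) * np.sqrt(freq))
    # Sortino penalizes downside volatility only
    downside_dev = np.sqrt(np.mean(np.minimum(daily_pnl, 0) ** 2))
    sortino = mean_ann / (downside_dev * np.sqrt(freq))
    return sharpe, sortino

# Synthetic daily PnL with a small positive drift (illustration only)
rng = np.random.default_rng(42)
pnl = rng.normal(0.1, 1.0, 2610)
sharpe, sortino = naive_ratios(pnl)
```

Since the downside deviation is smaller than the full standard deviation for a roughly symmetric PnL, the Sortino ratio exceeds the Sharpe ratio in magnitude here.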

## Duration returns

In this section, we examine the connection between bank survey scores and subsequent 5-year IRS (interest rate swap) fixed receiver returns for the aggregate of developed markets. Consistent with earlier notebooks, we set the primary signal `ms` to `BLSDSCORE_NSA`, the target `targ` to `DU05YXR_VT10`, and the alternative signal `rivs` to `BLSCSCORE_NSA`.

```
bls = [
    "BLSDSCORE_NSA",
    "BLSCSCORE_NSA",
]
sigs = bls
ms = "BLSDSCORE_NSA"  # main signal
oths = list(set(sigs) - set([ms]))  # other signals; renamed from `os` to avoid shadowing the os module
targ = "DU05YXR_VT10"
cidx = ["GDM"]

dict_dubk = {
    "df": dfx,
    "sig": ms,
    "rivs": oths,
    "targ": targ,
    "cidx": cidx,
    "black": None,
    "srr": None,
    "pnls": None,
}
```

We utilize the `CategoryRelations()` function from the `macrosynergy` package to visualize the connection between the bank survey credit demand score `BLSDSCORE_NSA` and the subsequent IRS return. As anticipated, the visualization confirms a negative relationship between credit demand scores and IRS returns, statistically significant at the 5% level. You can access more details on this analysis by referring to the provided link.

```
dix = dict_dubk

dfr = dix["df"]
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]

crx = msp.CategoryRelations(
    dfr,
    xcats=[sig, targ],
    cids=cidx,
    freq="M",
    lag=1,
    xcat_aggs=["last", "sum"],
    xcat_trims=[None, None],
)
crx.reg_scatter(
    labels=False,
    coef_box="lower left",
    xlab="Bank lending survey, credit demand z-score",
    ylab="5-year IRS return, vol-targeted at 10%, next month, %",
    title="Bank survey credit demand score and subsequent IRS returns of developed market basket",
    size=(10, 6),
)
```

Conducting a parallel analysis with the alternative bank survey metric, the credit supply score `BLSCSCORE_NSA`, and subsequent IRS returns reveals a notably weaker and less statistically significant relationship. The underlying reasons for this weaker correlation are elaborated upon in the accompanying post.

```
dix = dict_dubk

dfr = dix["df"]
sig = "BLSCSCORE_NSA"
targ = dix["targ"]
cidx = dix["cidx"]

crx = msp.CategoryRelations(
    dfr,
    xcats=[sig, targ],
    cids=cidx,
    freq="M",
    lag=1,
    xcat_aggs=["last", "sum"],
    start="1995-01-01",
    xcat_trims=[None, None],
)
crx.reg_scatter(
    labels=False,
    coef_box="lower left",
    xlab="Bank lending survey, credit conditions z-score (higher = easier)",
    ylab="5-year IRS return, vol-targeted at 10%, next month, %",
    title="Bank survey credit conditions and subsequent IRS returns of developed market basket",
    size=(10, 6),
)
```

The table below displays the accuracy of both bank survey signals using standard accuracy metrics:

```
dix = dict_dubk

dfr = dix["df"]
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]

srr = mss.SignalReturnRelations(
    dfr,
    cids=cidx,
    sig=sig,
    rival_sigs=rivs,
    sig_neg=True,  # survey scores are negative predictors of duration returns
    ret=targ,
    freq="M",
    start="1995-01-01",
)
dix["srr"] = srr

srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
```

| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BLSCSCORE_NSA_NEG | 0.493 | 0.492 | 0.511 | 0.553 | 0.545 | 0.439 | 0.070 | 0.238 | 0.036 | 0.369 |
| BLSDSCORE_NSA_NEG | 0.532 | 0.531 | 0.507 | 0.553 | 0.583 | 0.479 | 0.117 | 0.049 | 0.085 | 0.033 |
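The `_NEG` suffix indicates that the signals have been multiplied by minus one, consistent with their negative relationship to duration returns. The headline numbers themselves have simple definitions: accuracy is the share of periods in which the signal's sign matched the return's sign, and balanced accuracy averages the hit rates of positive and negative signal periods separately. A sketch on hypothetical signs (not the internal mechanics of `SignalReturnRelations`):

```python
import numpy as np

def sign_accuracy(sig: np.ndarray, ret: np.ndarray):
    """Accuracy and balanced accuracy of sign(sig) predicting sign(ret)."""
    hit = np.sign(sig) == np.sign(ret)
    acc = hit.mean()
    pos, neg = sig > 0, sig < 0
    bal_acc = 0.5 * (hit[pos].mean() + hit[neg].mean())
    return acc, bal_acc

# Hypothetical monthly signal and next-month return values
sig = np.array([1.2, -0.5, 0.3, -0.8, 0.9, -0.1])
ret = np.array([0.4, 0.2, -0.6, -1.0, 0.3, -0.2])
acc, bal = sign_accuracy(sig, ret)
```

Balanced accuracy guards against a signal that is long (or short) most of the time looking skillful just because the market drifts in one direction.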

### Naive PnL

The `NaivePnL()` class is specifically designed to offer a quick and straightforward overview of a simplified profit-and-loss (PnL) profile associated with a set of trading signals. The term “naive” is used because the methods within this class do not factor in transaction costs or position limitations, which may include considerations related to risk management. This omission is intentional because the impact of costs and limitations varies widely depending on factors such as trading size, institutional rules, and regulatory requirements.

As its primary objective, the class tracks the average IRS return for developed markets, `DU05YXR_VT10`, alongside the trading signals `BLSDSCORE_NSA` (credit demand z-score) and `BLSCSCORE_NSA` (credit supply z-score). It accommodates both binary PnL calculations, where signals are simplified into long (1) or short (-1) positions, and proportionate PnL calculations.

For more in-depth information regarding the `NaivePnL()` class and its functionalities, you can refer to the provided link here.
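Before running the class, it helps to see what the two signal operations used below do: `binary` reduces a signal to its sign, while `zn_score_pan` rescales it into a z-score around a neutral level of zero, with `thresh=3` capping scores at three standard deviations. The sketch below is a naive full-sample version with hypothetical numbers; the package estimates the scaling sequentially, out of sample:

```python
import numpy as np

sig = np.array([0.4, -1.1, 2.9, -0.2, 1.5])  # hypothetical signal values

# Binary: unit long (+1) or short (-1) positions from the signal's sign
binary = np.sign(sig)

# Proportionate: scale by dispersion around the zero neutral level, cap at +/-3
zn = np.clip(sig / np.abs(sig).mean(), -3, 3)
```

Binary signals trade position size for robustness; proportionate signals take larger positions when the survey score is further from neutral.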

```
dix = dict_dubk

dfr = dix["df"]
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]

naive_pnl = msn.NaivePnL(
    dfr,
    ret=targ,
    sigs=sigx,
    cids=cidx,
    start="2000-01-01",
    # bms=["USD_EQXR_NSA", "USD_GB10YXR_NSA"],
)

# Four PnL versions: proportionate (zn-score) and binary, each without (0) and with (1) a long bias
dict_pnls = {
    "PZN0": {"sig_add": 0, "sig_op": "zn_score_pan"},
    "PZN1": {"sig_add": 1, "sig_op": "zn_score_pan"},
    "BIN0": {"sig_add": 0, "sig_op": "binary"},
    "BIN1": {"sig_add": 1, "sig_op": "binary"},
}
for key, value in dict_pnls.items():
    for sig in sigx:
        naive_pnl.make_pnl(
            sig,
            sig_neg=True,
            sig_add=value["sig_add"],
            sig_op=value["sig_op"],
            thresh=3,
            rebal_freq="monthly",
            vol_scale=10,
            rebal_slip=1,
            pnl_name=sig + "_" + key,
        )
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
```

```
dix = dict_dubk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

pnls = [sig + ptype for ptype in ["_PZN0", "_BIN0"] for sig in sigx]
naive_pnl.plot_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    title="Naive PnLs for IRS baskets, based on survey scores, no bias",
    xcat_labels=[
        "based on credit demand score, proportionate",
        "based on credit conditions score, proportionate",
        "based on credit demand score, binary",
        "based on credit conditions score, binary",
    ],
    figsize=(16, 8),
)
```

```
dix = dict_dubk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

pnls = [sig + ptype for sig in sigx for ptype in ["_PZN0", "_PZN1", "_BIN0", "_BIN1"]] + ["Long only"]
df_eval = naive_pnl.evaluate_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    start="2000-01-01",
)
display(df_eval.transpose())
```

| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BLSCSCORE_NSA_BIN0 | 1.223353 | 10.0 | 0.122335 | 0.173114 | -13.743513 | -20.413889 | 285 |
| BLSCSCORE_NSA_BIN1 | 3.130152 | 10.0 | 0.313015 | 0.442775 | -19.17171 | -27.218277 | 285 |
| BLSCSCORE_NSA_PZN0 | 3.099813 | 10.0 | 0.309981 | 0.445465 | -20.764943 | -25.277125 | 285 |
| BLSCSCORE_NSA_PZN1 | 3.968332 | 10.0 | 0.396833 | 0.567533 | -18.368463 | -24.550183 | 285 |
| BLSDSCORE_NSA_BIN0 | 3.891083 | 10.0 | 0.389108 | 0.558398 | -12.517601 | -20.419225 | 285 |
| BLSDSCORE_NSA_BIN1 | 5.029574 | 10.0 | 0.502957 | 0.72133 | -18.353081 | -24.92399 | 285 |
| BLSDSCORE_NSA_PZN0 | 4.626679 | 10.0 | 0.462668 | 0.679379 | -19.770958 | -21.977027 | 285 |
| BLSDSCORE_NSA_PZN1 | 5.313217 | 10.0 | 0.531322 | 0.769027 | -18.169398 | -20.610818 | 285 |
| Long only | 3.090069 | 10.0 | 0.309007 | 0.436399 | -13.73263 | -28.227439 | 285 |

## Equity returns

Similar to our examination of fixed income returns, we proceed to explore the connections between bank lending survey scores and subsequent equity returns, targeted at 10% volatility. We initiate this analysis with the bank survey demand score and its relation to subsequent monthly equity index future returns, which turns out to be positive and statistically significant.

```
sigs = bls
ms = "BLSDSCORE_NSA"  # main signal
oths = list(set(sigs) - set([ms]))  # other signals; renamed from `os` to avoid shadowing the os module
targ = "EQXR_VT10"
cidx = ["GDM"]

dict_eqbk = {
    "df": dfx,
    "sig": ms,
    "rivs": oths,
    "targ": targ,
    "cidx": cidx,
    "black": None,
    "srr": None,
    "pnls": None,
}
```

```
dix = dict_eqbk

dfr = dix["df"]
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]

crx = msp.CategoryRelations(
    dfr,
    xcats=[sig, targ],
    cids=cidx,
    freq="M",
    lag=1,
    xcat_aggs=["last", "sum"],
    xcat_trims=[None, None],
)
crx.reg_scatter(
    labels=False,
    coef_box="lower left",
    xlab="Bank lending survey, credit demand z-score",
    ylab="Equity index future return, vol-targeted at 10%, next month, %",
    title="Bank survey credit demand score and subsequent equity returns of developed market basket",
    size=(10, 6),
)
```

The predictive correlation is even slightly stronger between the bank lending conditions score `BLSCSCORE_NSA` and subsequent monthly equity index returns.

```
dix = dict_eqbk

dfr = dix["df"]
sig = "BLSCSCORE_NSA"
targ = dix["targ"]
cidx = dix["cidx"]

crx = msp.CategoryRelations(
    dfr,
    xcats=[sig, targ],
    cids=cidx,
    freq="M",
    lag=1,
    xcat_aggs=["last", "sum"],
    xcat_trims=[None, None],
)
crx.reg_scatter(
    labels=False,
    coef_box="lower left",
    xlab="Bank lending survey, credit conditions z-score (higher = easier)",
    ylab="Equity index future return, vol-targeted at 10%, next month, %",
    title="Bank survey credit supply score and subsequent equity returns of developed market basket",
    size=(10, 6),
)
```

The table below displays the accuracy of both bank survey signals using standard accuracy metrics:

```
dix = dict_eqbk

dfr = dix["df"]
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]

srr = mss.SignalReturnRelations(
    dfr,
    cids=cidx,
    sig=sig,
    rival_sigs=rivs,
    sig_neg=False,
    ret=targ,
    freq="M",
    start="1995-01-01",
)
dix["srr"] = srr

srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
```

| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BLSCSCORE_NSA | 0.560 | 0.562 | 0.489 | 0.606 | 0.669 | 0.455 | 0.176 | 0.003 | 0.094 | 0.018 |
| BLSDSCORE_NSA | 0.563 | 0.565 | 0.493 | 0.606 | 0.671 | 0.458 | 0.153 | 0.010 | 0.089 | 0.025 |

### Naive PnL

As before with fixed income returns, we create naive PnLs using bank survey scores as signals and equity returns as the target. Please see here for details of the `NaivePnL()` class.

```
dix = dict_eqbk

dfr = dix["df"]
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]

naive_pnl = msn.NaivePnL(
    dfr,
    ret=targ,
    sigs=sigx,
    cids=cidx,
    start="2000-01-01",
    # bms=["USD_EQXR_NSA", "USD_GB10YXR_NSA"],
)

dict_pnls = {
    "PZN0": {"sig_add": 0, "sig_op": "zn_score_pan"},
    "PZN1": {"sig_add": 1, "sig_op": "zn_score_pan"},
    "BIN0": {"sig_add": 0, "sig_op": "binary"},
    "BIN1": {"sig_add": 1, "sig_op": "binary"},
}
for key, value in dict_pnls.items():
    for sig in sigx:
        naive_pnl.make_pnl(
            sig,
            sig_neg=False,
            sig_add=value["sig_add"],
            sig_op=value["sig_op"],
            thresh=3,
            rebal_freq="monthly",
            vol_scale=10,
            rebal_slip=1,
            pnl_name=sig + "_" + key,
        )
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
```

```
dix = dict_eqbk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

pnls = [sig + ptype for ptype in ["_PZN0", "_BIN0"] for sig in sigx]
naive_pnl.plot_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    title="Naive PnLs for equity index baskets, based on survey scores, no bias",
    xcat_labels=[
        "based on credit demand score, proportionate",
        "based on credit conditions score, proportionate",
        "based on credit demand score, binary",
        "based on credit conditions score, binary",
    ],
    figsize=(16, 8),
)
```

The PnLs below approximately add up the returns of long-only and survey-based positions in equal weights, producing long-biased portfolios.
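Mechanically, the long bias comes from `sig_add=1`, which adds one unit of long exposure to the signal before positions are formed, so the portfolio behaves roughly like an equal-weight combination of the survey strategy and a long-only book. A stylized sketch with hypothetical signal values:

```python
import numpy as np

zn_signal = np.array([0.5, -1.5, 0.0, 2.0])  # hypothetical z-scored signals

# sig_add=1 shifts every signal by one unit of long exposure;
# only a strongly negative score flips the position short
biased = zn_signal + 1
```

Only signals below -1 still translate into short positions after the shift.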

```
dix = dict_eqbk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

# Order of pnls matches the label order below (proportionate first, then binary)
pnls = [sig + ptype for ptype in ["_PZN1", "_BIN1"] for sig in sigx] + ["Long only"]
naive_pnl.plot_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    title="Naive PnLs for equity index baskets, based on survey scores, long bias",
    xcat_labels=[
        "based on credit demand score, proportionate",
        "based on credit conditions score, proportionate",
        "based on credit demand score, binary",
        "based on credit conditions score, binary",
        "Long only",
    ],
    figsize=(16, 8),
)
```

```
dix = dict_eqbk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

pnls = [sig + ptype for sig in sigx for ptype in ["_PZN0", "_PZN1", "_BIN0", "_BIN1"]] + ["Long only"]
df_eval = naive_pnl.evaluate_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    start="2000-01-01",
)
display(df_eval.transpose())
```

| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BLSCSCORE_NSA_BIN0 | 3.408224 | 10.0 | 0.340822 | 0.491622 | -12.027061 | -16.539795 | 285 |
| BLSCSCORE_NSA_BIN1 | 6.638977 | 10.0 | 0.663898 | 1.000791 | -16.444191 | -18.73449 | 285 |
| BLSCSCORE_NSA_PZN0 | 2.866716 | 10.0 | 0.286672 | 0.420959 | -22.757081 | -28.673342 | 285 |
| BLSCSCORE_NSA_PZN1 | 6.450741 | 10.0 | 0.645074 | 0.905152 | -13.889687 | -15.101154 | 285 |
| BLSDSCORE_NSA_BIN0 | 4.213133 | 10.0 | 0.421313 | 0.606284 | -12.028474 | -15.902305 | 285 |
| BLSDSCORE_NSA_BIN1 | 7.194512 | 10.0 | 0.719451 | 1.082466 | -16.234939 | -18.496094 | 285 |
| BLSDSCORE_NSA_PZN0 | 3.396119 | 10.0 | 0.339612 | 0.494071 | -21.040063 | -29.474641 | 285 |
| BLSDSCORE_NSA_PZN1 | 6.249208 | 10.0 | 0.624921 | 0.871028 | -17.896865 | -18.687865 | 285 |
| Long only | 4.680483 | 10.0 | 0.468048 | 0.63898 | -23.623212 | -20.938006 | 285 |

## Equity versus duration returns

In the final part of the value checks, we look at the relation between bank survey scores and volatility-targeted equity versus duration returns for the developed market basket. The target is the previously created difference between equity and duration returns, both targeted at 10% volatility (`EQvDUXR_VT10`).

```
sigs = bls
ms = "BLSDSCORE_NSA"  # main signal
oths = list(set(sigs) - set([ms]))  # other signals; renamed from `os` to avoid shadowing the os module
targ = "EQvDUXR_VT10"
cidx = ["GDM"]

dict_edbk = {
    "df": dfx,
    "sig": ms,
    "rivs": oths,
    "targ": targ,
    "cidx": cidx,
    "black": None,
    "srr": None,
    "pnls": None,
}

dix = dict_edbk

dfr = dix["df"]
sig = dix["sig"]
targ = dix["targ"]
cidx = dix["cidx"]

crx = msp.CategoryRelations(
    dfr,
    xcats=[sig, targ],
    cids=cidx,
    freq="M",
    lag=1,
    xcat_aggs=["last", "sum"],
    xcat_trims=[None, None],
)
crx.reg_scatter(
    labels=False,
    coef_box="lower left",
    xlab="Bank lending survey, credit demand z-score",
    ylab="Equity versus IRS returns (both vol-targeted), next month, %",
    title="Bank survey credit demand and subsequent equity versus IRS returns of developed market basket",
    size=(10, 6),
)
```

```
dix = dict_edbk

dfr = dix["df"]
sig = "BLSCSCORE_NSA"
targ = dix["targ"]
cidx = dix["cidx"]

crx = msp.CategoryRelations(
    dfr,
    xcats=[sig, targ],
    cids=cidx,
    freq="M",
    lag=1,
    xcat_aggs=["last", "sum"],
    xcat_trims=[None, None],
)
crx.reg_scatter(
    labels=False,
    coef_box="lower left",
    xlab="Bank lending survey, credit conditions z-score (higher = easier)",
    ylab="Equity versus IRS returns (both vol-targeted), next month, %",
    title="Bank survey credit conditions and subsequent equity versus IRS returns of developed market basket",
    size=(10, 6),
)
```

### Accuracy and correlation check

```
dix = dict_edbk

dfr = dix["df"]
sig = dix["sig"]
rivs = dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]

srr = mss.SignalReturnRelations(
    dfr,
    cids=cidx,
    sig=sig,
    rival_sigs=rivs,
    sig_neg=False,
    ret=targ,
    freq="M",
    start="1995-01-01",
)
dix["srr"] = srr

srrx = dix["srr"]
display(srrx.signals_table().sort_index().astype("float").round(3))
```

| | accuracy | bal_accuracy | pos_sigr | pos_retr | pos_prec | neg_prec | pearson | pearson_pval | kendall | kendall_pval |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BLSCSCORE_NSA | 0.56 | 0.561 | 0.489 | 0.542 | 0.604 | 0.517 | 0.146 | 0.014 | 0.082 | 0.040 |
| BLSDSCORE_NSA | 0.57 | 0.571 | 0.493 | 0.542 | 0.614 | 0.528 | 0.165 | 0.005 | 0.114 | 0.004 |


### Naive PnL

```
dix = dict_edbk

dfr = dix["df"]
sigx = [dix["sig"]] + dix["rivs"]
targ = dix["targ"]
cidx = dix["cidx"]

naive_pnl = msn.NaivePnL(
    dfr,
    ret=targ,
    sigs=sigx,
    cids=cidx,
    start="2000-01-01",
    # bms=["USD_EQXR_NSA", "USD_GB10YXR_NSA"],
)

dict_pnls = {
    "PZN0": {"sig_add": 0, "sig_op": "zn_score_pan"},
    "PZN1": {"sig_add": 1, "sig_op": "zn_score_pan"},
    "BIN0": {"sig_add": 0, "sig_op": "binary"},
    "BIN1": {"sig_add": 1, "sig_op": "binary"},
}
for key, value in dict_pnls.items():
    for sig in sigx:
        naive_pnl.make_pnl(
            sig,
            sig_neg=False,
            sig_add=value["sig_add"],
            sig_op=value["sig_op"],
            thresh=3,
            rebal_freq="monthly",
            vol_scale=10,
            rebal_slip=1,
            pnl_name=sig + "_" + key,
        )
naive_pnl.make_long_pnl(vol_scale=10, label="Long only")
dix["pnls"] = naive_pnl
```

```
dix = dict_edbk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

pnls = [sig + ptype for ptype in ["_PZN0", "_BIN0"] for sig in sigx]
naive_pnl.plot_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    title="Naive PnLs for equity versus IRS baskets, based on survey scores, no bias",
    xcat_labels=[
        "based on credit demand score, proportionate",
        "based on credit conditions score, proportionate",
        "based on credit demand score, binary",
        "based on credit conditions score, binary",
    ],
    figsize=(16, 8),
)
```

```
dix = dict_edbk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

# Order of pnls matches the label order below (proportionate first, then binary)
pnls = [sig + ptype for ptype in ["_PZN1", "_BIN1"] for sig in sigx] + ["Long only"]
naive_pnl.plot_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    title="Naive PnLs for equity versus IRS baskets, based on survey scores, long equity bias",
    xcat_labels=[
        "based on credit demand score, proportionate",
        "based on credit conditions score, proportionate",
        "based on credit demand score, binary",
        "based on credit conditions score, binary",
        "Always long equity versus fixed income",
    ],
    figsize=(16, 8),
)
```

```
dix = dict_edbk
sigx = [dix["sig"]] + dix["rivs"]
naive_pnl = dix["pnls"]

pnls = [sig + ptype for sig in sigx for ptype in ["_PZN0", "_PZN1", "_BIN0", "_BIN1"]] + ["Long only"]
df_eval = naive_pnl.evaluate_pnls(
    pnl_cats=pnls,
    pnl_cids=["ALL"],
    start="2000-01-01",
)
display(df_eval.transpose())
```

| xcat | Return (pct ar) | St. Dev. (pct ar) | Sharpe Ratio | Sortino Ratio | Max 21-day draw | Max 6-month draw | Traded Months |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BLSCSCORE_NSA_BIN0 | 2.880194 | 10.0 | 0.288019 | 0.411972 | -13.439853 | -20.513339 | 285 |
| BLSCSCORE_NSA_BIN1 | 3.161381 | 10.0 | 0.316138 | 0.476121 | -18.187873 | -27.760273 | 285 |
| BLSCSCORE_NSA_PZN0 | 3.59825 | 10.0 | 0.359825 | 0.523648 | -17.64478 | -29.339077 | 285 |
| BLSCSCORE_NSA_PZN1 | 4.135961 | 10.0 | 0.413596 | 0.579669 | -15.633289 | -21.769172 | 285 |
| BLSDSCORE_NSA_BIN0 | 4.988134 | 10.0 | 0.498813 | 0.719488 | -13.444122 | -20.519854 | 285 |
| BLSDSCORE_NSA_BIN1 | 4.771137 | 10.0 | 0.477114 | 0.725011 | -17.923569 | -27.356865 | 285 |
| BLSDSCORE_NSA_PZN0 | 4.913003 | 10.0 | 0.4913 | 0.726846 | -16.494524 | -31.19248 | 285 |
| BLSDSCORE_NSA_PZN1 | 4.612178 | 10.0 | 0.461218 | 0.655218 | -19.2194 | -24.757107 | 285 |
| Long only | 1.100752 | 10.0 | 0.110075 | 0.151329 | -19.840278 | -24.840348 | 285 |