Thursday, February 29, 2024

Applying Predictive Modeling to Crypto Futures Trading

Tradery Labs
I recently had the pleasure of doing some advisory and coaching work with a startup called Tradery Labs.

Tradery Labs is bringing advanced predictive-modeling techniques into a highly honed system that aims to democratize predictive algorithmic trading. The company’s goal is to give investors the tools they need to build and test their own algorithms without relying on data scientists and programmers. More on that in a future post.

I recently sat down with Tradery’s head of modeling, Angel Aponte, to talk shop about his latest models in crypto futures.

Some Background on Bitcoin Futures
Unlike stocks, where you can “sell short” and bet against the value of a share, there is no concept of “selling short” actual bitcoin: you can only buy it or not hold it. The futures market for bitcoin changed all of that in 2017 and enabled traders to take a positive or negative financial position on bitcoin. These bitcoin futures let traders align their investment with their view on where bitcoin is headed. That is, traders sell futures when they expect bitcoin to decline and buy futures when they expect bitcoin to rise.
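To make the positioning concrete, here is a toy payoff calculation; the contract size and prices are arbitrary examples rather than any particular exchange’s contract specification.

```python
# Simple illustration of taking a view with futures: a short position gains
# when the price falls and loses when it rises. Contract size and prices
# are arbitrary examples, not a specific exchange's specification.
contracts, contract_size = 2, 0.1          # e.g., 0.1 BTC per contract (assumed)
entry_price, exit_price = 50_000, 46_000   # USD per BTC

long_pnl = contracts * contract_size * (exit_price - entry_price)
short_pnl = -long_pnl
print(long_pnl, short_pnl)                 # -800.0 (long loses), 800.0 (short gains)
```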

What are the goals of the latest Tradery Labs algorithm?
Tradery has some lofty goals: the current algorithm targets a 50% annual return on investment and tries to achieve the following objectives (a rough sketch of how these checks might look in code follows the list):

  1. Beat a buy and hold on the base asset

  2. Have more winning months than losing months

  3. The largest winning month needs to be bigger than the largest losing month

  4. Worst-case drawdown (peak-to-trough decline in value) of no more than 20%

  5. Make money when the market goes up and when it goes down

  6. Produce profit overall
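As a rough illustration of what those checks might look like against a backtest’s monthly returns, here is a minimal sketch; the return series and benchmark below are placeholders, not Tradery’s results.

```python
# Hypothetical sketch: checking a backtest's monthly returns against the
# objectives above. The returns and buy-and-hold benchmark are placeholders.
import numpy as np

strategy = np.array([0.06, -0.02, 0.09, 0.01, -0.04, 0.12])   # monthly returns
buy_hold = np.array([0.04, -0.05, 0.07, 0.02, -0.08, 0.10])   # benchmark returns

equity = np.cumprod(1 + strategy)
drawdown = 1 - equity / np.maximum.accumulate(equity)          # peak-to-trough decline

checks = {
    "beats buy-and-hold":       equity[-1] > np.prod(1 + buy_hold),
    "more winners than losers": (strategy > 0).sum() > (strategy < 0).sum(),
    "best month > worst month": strategy.max() > abs(strategy.min()),
    "max drawdown <= 20%":      drawdown.max() <= 0.20,
    "profitable overall":       equity[-1] > 1.0,
}
print(checks)
```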

Like all predictive model builders, Tradery is in continuous improvement mode; the model never reaches perfection. In financial markets this is especially important, because market volatility creates new trends with new causes, and that new information can be used to retrain a predictive model and improve its accuracy.

Testing Models
Predictive modelers always test different approaches to achieve their objectives. Tradery tested both statistical and deep learning methods for this latest project. Statistical techniques use mathematical models to predict outcomes, while deep learning methods use an algorithm that learns from the available data to make predictions. Both techniques have a rich ecosystem of free software libraries that enable flexible model building. The key is to have reliable, reproducible tests that you can iterate over quickly, and then validate those results in the real world. All strategies that test successfully need to be followed by months of real-world results before they are traded live with real money at stake.
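For readers who want a feel for what such a reproducible comparison can look like, here is an illustrative sketch that pits a simple statistical model against a small neural network on the same chronological split; the synthetic data, features, and models are assumptions for demonstration and are not Tradery’s.

```python
# Illustrative only: compare a simple statistical model (linear autoregression)
# with a small neural network on the same chronological split. The synthetic
# price series and features are assumptions, not Tradery's data or models.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)                  # fixed seed => reproducible test
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100
returns = np.diff(prices) / prices[:-1]

# Features: the last 5 returns; target: the next return.
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]

split = int(len(X) * 0.7)                        # chronological, not random, split
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

for name, model in [("statistical", LinearRegression()),
                    ("deep learning", MLPRegressor(hidden_layer_sizes=(32, 16),
                                                   max_iter=2000, random_state=42))]:
    model.fit(X_train, y_train)
    print(name, mean_absolute_error(y_test, model.predict(X_test)))
```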

Testing does not necessarily produce clear-cut winning models. Modern techniques are so advanced that the top models are usually comparable. Tradery finds that models vary in deciding when to put capital at risk and when to de-risk (i.e., buying and selling). Some techniques do better in uptrends, some do better in downtrends, and some perform best when the market is trading in a range (i.e., generally moving sideways). Over a period of time, these are the only three states a market can be in, which makes choosing between models on small differences challenging. In this exercise, Tradery’s winning model is based on statistical techniques, not deep learning, which may come as a surprise.

Model Performance
Tradery backtests its model using historical data that the model has never seen before. The model’s ability to make predictions on that unseen data is tested repeatedly to rate how well it is likely to perform in real life.
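A rough way to picture this repeated out-of-sample testing is a walk-forward loop, sketched below under the same assumptions as the earlier sketch (and reusing its X, y, and model): retrain on an expanding window of history and score only on the slice the model has never seen.

```python
# Rough walk-forward idea, assuming X, y, and model from the sketch above:
# retrain on an expanding window of history, then score only the next,
# never-before-seen slice, repeating as the window rolls forward.
import numpy as np

window, step, errors = 500, 50, []
for end in range(window, len(X) - step, step):
    model.fit(X[:end], y[:end])                      # train on history only
    preds = model.predict(X[end:end + step])         # test on unseen slice
    errors.append(np.mean(np.abs(preds - y[end:end + step])))
print("mean out-of-sample error:", np.mean(errors))
```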

The picture below shows how the model has done in backtesting. You can see that the model performed very well predicting outcomes using data it had never seen before. Green bubbles and dotted lines mean good, while red means bad.

Notice the outsized winning trades and the lack of outsized losers at the top of the picture. Those are the upward-pointing green arrowheads. Two in particular sit well above the mixed green and red arrows that are in a tight range. Those two arrowheads show the trades that drove the big uptrends in performance.

In the lower portion of the picture, note the positive returns in green for market movements going up and down. This means the model is picking the right position based on expectations of increasing and decreasing prices.

Also, note that the model positioned the trades incorrectly in a sideways market as shown in red on the left. You can see the red dotted line (zoom in) which shows losses in sideways market movement.

In this data set, the model was not wrong about any big swings, which is why there are no outsized losing trades, as stated earlier. However, this particular model did not make winning decisions when the market moved sideways.


Findings and Takeaways

Angel Aponte provided some insights into important observations from the process.

The market adapts and evolves over time.

Therefore, a model’s performance will degrade over time. In addition, more traders are coming into these emerging futures markets and those new entrants create dynamics that change rapidly. As a result, the team must continually test new models and retrain existing ones.

Another important finding indicates that faster and more frequent trading is not necessarily better.

One might think that high-velocity trading is a natural outcome of these kinds of models, but the trading signal the model seeks can get noisy over short intervals, making quick decisions unreliable.

In addition, the cost of commissions is an important factor when trading algorithmically. You can have a highly accurate model that loses money on trading commissions, so including that cost in backtests is important. This is another cost that works against high-velocity trading. A model needs to cover its transaction costs to be successful.
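A toy example of that commission effect, with invented numbers rather than any actual fee schedule: a strategy with a positive gross edge per trade can still lose money once a round-trip commission is subtracted.

```python
# Toy example of why fees belong in the backtest: a positive gross edge per
# trade can turn negative once a per-trade commission is subtracted. The
# numbers are illustrative assumptions, not an actual fee schedule.
gross_return_per_trade = 0.0008      # 0.08% average gross edge per trade (assumed)
commission_per_trade   = 0.0010      # 0.10% round-trip cost (assumed)
trades_per_month       = 200

gross = (1 + gross_return_per_trade) ** trades_per_month - 1
net   = (1 + gross_return_per_trade - commission_per_trade) ** trades_per_month - 1
print(f"gross monthly return: {gross:.1%}, net of commissions: {net:.1%}")
```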

I will report back on live trading results using these algorithms sometime in the future.

Visit Tradery Labs here.

Reference: https://shorturl.at/jpyQ7

 

Wednesday, February 21, 2024

How We Replaced an Implementation of Workday Adaptive Planning Enterprise Management with Microsoft’s PowerBI Tailored for FP&A Reporting

Excel’s powerful capabilities, integrations and flexibility make it a favored tool for financial and accounting professionals. Like many middle-market companies, we considered moving from an Excel-dominated financial planning and reporting process to an “enterprise grade” solution. In a very difficult decision, we set aside Excel for a unified financial planning tool, also known as an Enterprise Performance Management (EPM) system.


After a review of solutions and recommendations, we decided to move our financial planning to Workday’s Adaptive Planning (WAP). Our financial forecast in Excel is a complete system: it handles recurring revenue waterfalls, consolidations by product and business unit, eliminations between business units, and balance sheet forecasting, among other complexities. Nevertheless, the transition, despite a good plan on paper, became never-ending.

We faced two core problems, which we thought we could overcome. First, the precision and complexity of our Excel forecasting model was hard to replicate in WAP. Second, our lack of deep knowledge of WAP modeling forced heavy reliance on consultants and a time-consuming iterative process to make any headway. To minimize the obstacles and make some use of WAP, we paused our forecasting transition efforts and focused on WAP as a reporting tool. We had modest success, but we ended up with a hodge-podge system of exceptions and frequent error checking that was worse than the status quo.

During this failed transition period, the analytics team, which is part of our finance team, dramatically increased its expertise and capabilities in PowerBI. (While I am going to focus on PowerBI, I encourage the finance pros reading this to think about this solution using whatever business intelligence platform that is available. This should work with any BI platform.) PowerBI’s integrations with Excel and our accounting system (Microsoft NAV) provided the light-bulb moment for moving forward with an in-house automated financial reporting system connecting our Excel forecasts to accounting results and producing polished reporting in real-time.

In order to get there, we assigned a skilled data analyst to work directly with accounting and FP&A to create an ETL (extract, transform and load) template in PowerBI that could take our GL-coded accounting records and match them to financial reports that were business friendly and consistent with our forecasting templates. Here are the key success criteria that made this possible.

  1. Our data analyst had the PowerBI and SQL skills needed for the entire buildout.

  2. We were lucky that our data analyst also had solid accounting/finance knowledge to work directly with FP&A and accounting teammates. However, this could have been another team member working in tandem.

  3. The financial reporting templates were already matched to our Excel forecasting outputs. This line-for-line matching eliminated the need for another ETL template, but one could have been created if necessary.

  4. Our data analyst spent time mapping GL codes to our financial reporting templates (a simplified sketch of this mapping step follows the list). Without this, the ETL development would have been impossible.

  5. In addition, the data analyst methodically mapped our eliminations entries between subsidiaries and hierarchical entities.

  6. Then, it was time for record matching so that financial reporting template, forecast and GL Codes could be connected in sample data with a clear line of sight to each other.

  7. Finally, the ETL template was ready to be programmed and tested.

  8. PowerBI reporting dashboards were then developed and tested with initial data flows. Here the finance team compared PowerBI financial reports to our previous reports, checking for accuracy at the line-item, subtotal, and total levels. Any errors were traced all the way back to GL codes to ensure the fixes could be implemented in the ETL template.

  9. We then iterated step 8 until multiple periods showed no errors and everything tied out to the most important GL line items, such as net income, fixed assets, total revenue, and cash balance, in every grouping variation we needed (e.g., consolidated, product, business unit, geography, etc.).
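For illustration, here is a simplified pandas sketch of the GL-to-report-line mapping and aggregation described in steps 4 through 6; the account codes, entities, and report lines are made up, and the real work happened in the PowerBI/SQL ETL rather than in Python.

```python
# Simplified sketch of the GL-to-report-line mapping step, using pandas as a
# stand-in for the PowerBI/SQL transforms. Codes and lines are invented.
import pandas as pd

gl = pd.DataFrame({
    "gl_code": ["4000", "4100", "5000", "6100"],
    "entity":  ["US",   "US",   "EU",   "US"],
    "amount":  [120000, 45000, -60000, -15000],
})

mapping = pd.DataFrame({
    "gl_code":     ["4000", "4100", "5000", "6100"],
    "report_line": ["Recurring revenue", "Services revenue",
                    "Cost of goods sold", "G&A expense"],
})

# Join each GL record to its business-friendly report line, then aggregate
# to the same layout as the Excel forecasting templates.
report = (gl.merge(mapping, on="gl_code", how="left")
            .groupby(["entity", "report_line"], as_index=False)["amount"].sum())
print(report)
```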

The above process took about 120 days to get through Step 8 and then another 60 days (2 reporting cycles) to get through Step 9. All of this was achieved with one resource dedicated to the project and all other FP&A and accounting teammates being on call as needed.

With our financial reporting now published in an automated way, we have dramatically reduced the processing time and eliminated exceptions handling for information flows from accounting to financial reporting. While the EPM also promised financial modeling automations, we never went back to that. Instead, we have improved our Excel-based forecasting models in ways that would be hard to replicate in a new system given the resources we have and the connections of these models to our PowerBI reporting system.

If you are considering an EPM, especially for reporting, it might be worth looking at your existing business intelligence platform for an easier and more manageable solution.

Reference: https://salvatoretirabassi.substack.com/p/how-we-replaced-an-implementation

 

Wednesday, February 14, 2024

Cracks in Consumer Credit Card Delinquency Despite High Cash Balances

On January 22, I posted an article on consumer financial strength driven by the amount of cash consumers have in checkable deposits as reported by the Fed. If you look at the bottom 50% of households by wealth, they are sitting on an astounding 2.5x as much cash in their checking accounts as they had before the start of COVID. See the chart below.


You can see that the amount of cash peaked in September 2022 and has since been declining. The rate of decline, though, indicates that it will be some time before consumers get back to pre-COVID cash levels.

In January, Transunion reported that more recently issued credit cards are reaching high delinquency rates much earlier than expected. If you are new to consumer finance: analysts look at how credit performs from the date of issuance (also called a vintage), which lets you compare how different issuance dates perform against one another.

Issuance dates closer to hard financial times should underperform the preceding issuance dates.

Let’s look at the Transunion delinquency chart.


This chart shows the percentage of credit cards (as a pool) issued in Q4 of each of the last five years that have reached 90+ days delinquency (payments more than 90 days past due). Each recent vintage pool has reached the delinquency level of the previous vintage pool in a shorter period of time. For example, the orange line (Q4 2018 vintage pool) took 54 months to reach a delinquency rate of about 10%. Now look at the light blue line (Q4 2021 vintage pool): it took only 15 months to reach 10% delinquency. The most recent issuance date, the purple line (Q4 2022 vintage pool), is already on a faster pace than all previous vintage pools. Notice it is steeper than the light blue Q4 2021 line that preceded it.
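For readers who want to reproduce this kind of vintage view on their own portfolio data, a minimal sketch follows; the data frame is invented sample data, with values chosen to echo the 54-month and 15-month figures above.

```python
# Hypothetical vintage analysis in the spirit of the Transunion chart:
# for each issuance vintage, find the months on books at which the pool's
# cumulative 90+ DPD rate first crosses 10%. The data below is invented.
import pandas as pd

perf = pd.DataFrame({
    "vintage":         ["2018Q4"] * 3 + ["2021Q4"] * 3,
    "months_on_books": [12, 36, 54, 6, 12, 15],
    "dpd90_rate":      [0.03, 0.07, 0.101, 0.04, 0.08, 0.102],
})

months_to_10pct = (perf[perf["dpd90_rate"] >= 0.10]
                   .groupby("vintage")["months_on_books"].min())
print(months_to_10pct)   # e.g., 2018Q4 -> 54 months, 2021Q4 -> 15 months
```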

If consumers are sitting on so much cash, why are credit cards going delinquent at a quickening rate?

There are many factors that could be at play here. Here are some of the drivers that I think are important.

  • The cash balances above reflect the bottom 50% of households, as a group. Hidden in that data are the stratifications of cash by household wealth, which would likely show lower cash savings as you move down to less wealthy households.

  • Similarly, the Transunion data above also groups all credit risk stratifications together. In a view stratified by credit risk tier, you would likely see that the rates and pacing of delinquency are higher and faster for lower-credit consumers.

  • The positive effects of COVID economics (higher wages, stimulus, new credit availability, savings from stay-at-home orders) are unwinding more quickly for the lower credit consumers.

This part of the market had the opportunity to spend more and put more on credit during COVID, and the banks were eager to bring on new credit card accounts. As the economy has gone back to normal over the last 24 months, these consumers have more regular demands on their cash, which took a backseat during COVID. Moreover, they now have credit card bills to address, which carry the highest interest rates we have seen in years. The combination leads to increasing delinquencies, even though cash looks abundant.

 Reference: https://salvatoretirabassi.substack.com/p/cracks-in-consumer-credit-card-delinquency

Tuesday, February 6, 2024

Consumer Credit Card Interest Savings in a Decreasing Rate Environment

The stock market took some swings last week. It was down hard on January 31st on fears of no rate relief from the Fed and then rebounded firmly the very next day. An emotional roller coaster for many, to be sure. In this post, I am going to look at the upcoming rate environment but focus in on consumer debt and the potential savings consumers will experience as rates decline.

While the stock market will fluctuate wildly based on changing sentiments about interest rates over the course of 2024, consumer borrowers will be affected in a direct and consistent way because most consumer debt products are pegged to the Fed Funds rate. I am not going to focus on mortgages because much of the consumer housing market is carrying historically low rates into this environment already. As a result, decreasing rates will benefit new borrowers, not those who closed or refinanced their mortgages before 2022, which represents most mortgage holders in the US. Instead, I want to focus on total consumer debt and the fluctuating interest rates in credit cards which have rates priced as a function of the Prime Rate, which is a function of the Fed Funds rate.

The Fed has pointed to as many as three rate cuts of 25 basis points each. With the Fed Funds rate at 5.5%, this would bring down the rate to 4.75% at some point during 2024. The CME’s (Chicago Mercantile Exchange’s) Fedwatch indicator points to six cuts of 25 basis points during 2024 – 1.5% in total. This would bring the rate down to 4%. Bookending the two estimates, the Fed Funds rate will end up somewhere between 4% and 4.75% barring any unforeseen events that could change that.

Where the Fed ends up will depend partially on strong wage growth due to the low unemployment rate, currently 3.7%, which has been below 4.0% for 23 months in a row. Because unemployment has been sustainably low, wages have benefited, and real disposable income on a per capita basis has increased 3.7% year-over-year, much higher than the 1.7% averaged before the previous nine recessions since 1959. With that said, inflation has come into a more manageable range for the Fed, so the strength of the consumer, in my view, is less likely to cause inflation to rebound and is more important in avoiding a recession.

Now, with respect to consumer disposable income, interest expense has eaten into some of the positive gains in wages. The chart below shows how total consumer debt has grown through Q3 of 2023. Total consumer debt in the United States is approaching levels not seen since the housing-bubble highs before the 2008 crisis.

However, debt service remains well below the highs. This was caused by the massive deleveraging after 2008, when many debts carried high interest rates, followed by re-leveraging into a low-rate environment that was sustained first by the housing recession and then by the COVID actions taken by the Fed. So overall, consumers have borrowed consistently since 2013 at very low rates, which fuels their ability to spend. You can see in the chart below that during COVID (the absolute bottom of the line chart) debt service reached an impressive low point, but even with the rebound in interest payments caused by all the recent Fed increases, debt service payments are still at a relative low point. In fact, debt service as a percentage of disposable income as of the middle of 2023 was lower than at any time since 1981, excluding the pandemic period.

While the information above is about 3-9 months old (the Fed is slow with its data), the charts probably have not moved materially since they were published, in my opinion. So, taken at face value, decreasing interest rates in 2024 will significantly help the consumer.

Right now, US consumers are near all-time lows in debt service even as they face the highest interest rates they have seen in 17 years. At the same time, US consumers are reaching their highest accumulated debt in 17 years. Thanks to historically low rates, US consumers have locked in super-low mortgage rates, which are predominantly fixed, and that has allowed these seemingly contradictory facts to co-exist.

Back to credit cards. The variable debts US consumers hold in credit cards are what will drive interest expense fluctuations, and much of the increase in debt service is related to rising credit card interest rates. In fact, since the Fed started raising interest rates, average credit card interest rates have risen from 15% to 21.5%. This trend will reverse with the Fed rate cuts, and US consumers will feel less payment pressure on credit cards. We could see credit card interest rates fall back below 20% in the foreseeable future – still high, but providing immediate savings to consumers.
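For intuition, here is a back-of-envelope pass-through sketch, using the common convention that the prime rate sits about three points above the Fed Funds rate and that card APRs are priced as prime plus an issuer margin; the margin is backed into from the 21.5% average above and is an assumption, not issuer pricing data.

```python
# Rough pass-through sketch: Fed Funds -> prime (~ Fed Funds + 3%) -> card APR
# (prime + issuer margin). The ~13-point margin is backed into from the 21.5%
# average APR cited above and is an assumption, not issuer pricing data.
fed_funds_today = 0.055
issuer_margin = 0.215 - (fed_funds_today + 0.03)    # roughly 13 points over prime

for cuts_bps in (75, 150):                           # three cuts vs. six cuts
    fed_funds = fed_funds_today - cuts_bps / 10_000
    card_apr = fed_funds + 0.03 + issuer_margin
    print(f"{cuts_bps} bps of cuts -> average card APR around {card_apr:.2%}")
```

On this simple arithmetic, six 25-basis-point cuts would bring the average APR to roughly 20%, consistent with the direction described above.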

I estimate that the interest savings they might experience should range between $100 and $200 per household per year, and this can continue to grow if rates decline further. $100-$200 per household per year may not sound like a lot, but if you put it in the context of spending events, it becomes more meaningful. For example, it’s a few extra dinners out for the average American household that they previously couldn’t afford.
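One rough way to arrive at an estimate in that neighborhood, with loudly labeled assumptions (total revolving credit of roughly $1.3 trillion spread across about 130 million households, and card APRs falling point-for-point with the Fed):

```python
# Assumption-heavy sketch of per-household interest savings: ~$1.3T of
# revolving credit over ~130M households is about $10,000 per household,
# and APRs are assumed to fall one-for-one with the Fed Funds rate.
revolving_balance_per_household = 1.3e12 / 130e6    # ~ $10,000 (approximation)

for cut in (0.0075, 0.015):                          # three cuts vs. six cuts
    annual_savings = revolving_balance_per_household * cut
    print(f"{cut:.2%} lower APR -> ~${annual_savings:,.0f} saved per household per year")
```

That yields roughly $75-$150 per household per year, the same order of magnitude as the range above; households carrying larger revolving balances would save proportionally more.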

Reference: https://shorturl.at/uADX4

 

Unveiling the Enigma: Contrasting Consumer Cash Reserves with Escalating Credit Card Delinquencies

 

A recent analysis sheds light on the intriguing interplay between burgeoning consumer cash reserves and the surprising surge in credit card delinquencies. Despite the Federal Reserve’s reports revealing a remarkable 2.5x increase in cash holdings for the bottom 50% of households, a deeper dive into Transunion’s credit data exposes an unexpected trend in delinquency rates among recently issued credit cards.

Visualizing the Cash Peak:

Illustrating the ascent and descent of consumer cash, a Federal Reserve chart showcases the savings rate versus currency and checkable deposits for the top 50% of U.S. households. Notably, cash holdings peaked in September 2022, and while a decline is underway, the rate of decrease suggests a prolonged period before reaching pre-COVID levels.

Delving into Credit Card Delinquencies:

Contrary to expectations, Transunion’s October 2023 report unravels a concerning pattern in credit card delinquencies, particularly among recent issuances. The analysis, organized by vintage (issuance date), indicates a noteworthy acceleration in delinquency rates over shorter periods. Notably, the most recent vintage (Q4 2022) surpasses the pace of all its predecessors.

Savings Rate vs. Currency and Checkable Deposits Top 50% of US Households. Source: Federal Reserve, 2023

Analyzing the Discrepancy:

The conundrum arises – why are credit cards experiencing escalating delinquencies despite consumers holding substantial cash reserves? Several factors contribute to this apparent paradox. The Federal Reserve’s data, reflecting the bottom 50% of households, conceals the nuanced distribution of cash by wealth. A deeper dive into Transunion’s data suggests that stratifying credit risk tiers may unveil higher and faster delinquency rates among lower-grade credit consumers.

Unwinding Positive Effects of COVID Economics:

The favorable effects of COVID-related economic stimuli, such as increased wages, stimulus packages, newfound credit availability, and savings from stay-at-home orders, are now unwinding for lower credit tiers. This segment had the opportunity to spend more and accumulate debt during the pandemic, with banks readily extending new credit accounts. However, as the economy reverts to normalcy, these consumers face regular demands on their cash, leading to a resurgence of credit card bills with historically high interest rates. Consequently, the combination of heightened financial commitments and mounting credit card debt is fueling a surge in delinquencies, despite the apparent abundance of cash.

In unraveling this financial paradox, it becomes evident that the intricate dynamics of consumer behavior and economic shifts necessitate a comprehensive understanding for stakeholders in the financial landscape.

Reference: https://shorturl.at/puDY7

 

Monday, February 5, 2024

Decoding Consumer Balance Sheets: A Deeper Dive Beyond Savings Rates

 

Navigating the landscape of consumer finance, especially in the realm of excessive debt, prompts questions about the financial robustness of consumers and its potential impact on economic trends. In the post-COVID era, media discussions often revolve around the consumer savings rate, a metric influenced by stimulus measures and changing consumption patterns. However, a recent revelation, supported by alternative data points, challenges conventional perspectives on consumer finances. This analysis delves into the nuances of consumer balance sheets, exploring the interplay between savings rates and the substantial cash build-up in checking accounts.

Alternative Data Insights:

While the savings rate serves as a valuable indicator, it falls short in revealing the depth of cash accumulation. Contrary to widely reported savings rates, a closer look at the Federal Reserve’s Currency and Checkable Deposits data uncovers a more robust and sustainable financial position for the average US consumer. Comparing the cash availability evolution for the Bottom 50% and Top 50% of households reveals a significant uptick, with the former experiencing a 2.5x increase since January 2020 and the latter boasting a more substantial 3.5x surge.

Savings Rate vs. Currency and Checkable Deposits Bottom 50% of US Households

Charting the Course:

The provided charts depict the evolution of Currency and Checkable Deposits for both household groups. Notably, both segments began utilizing their accumulated cash, with the Bottom 50% initiating consumption in June 2022 and the Top 50% following suit in October 2022.

Average Consumer Balance Sheets:

Analyzing these data points underscores the resilience of the average US consumer balance sheets, with ample cash reserves and a prolonged trajectory before returning to pre-COVID levels. However, the sustainability of these balances varies by wealth decile, with wealthier households demonstrating a more protracted cash preservation period.

Erosion of Cash and Wealth Disparities:

It is crucial to acknowledge that these observations represent averages across all households, and the erosion of cash will likely manifest from the bottom up. The bottom 50% has already experienced negative growth, contrasting with the top 50%, signaling potential disparities in the impact of economic shifts. Less affluent households may face recessionary pressures while the broader economy remains relatively stable.

Forecasting Economic Trends:

While consumer balance sheets are not projected to be a significant driver of economic slowdown in the short term, factors like hiring trends, wages relative to inflation, and industrial output are expected to play more substantial roles in shaping the economic landscape. The intricate dynamics of wealth distribution and consumer behavior necessitate a comprehensive understanding for accurate forecasting of a recession or a “soft landing” scenario in the coming years.

Reference: https://tirabassi.com/

Sunday, February 4, 2024

Unlocking Synergies: Elevating Data Science with Operations Research Expertise

 **Introduction:**

On a quest to develop advanced data science capabilities, my analytics team’s strategic expansion brought together diverse talents in statistics, applied math, and engineering. This case study explores the integration of operations research, fostering collaboration and knowledge diversity within analytics.

**Objective:**

Our primary goal was to blend diverse skill sets, creating an environment conducive to innovative problem-solving. While the envisioned integration remained a future prospect, immediate focus shifted to operations research for its promising prescriptive capabilities. 

**Operations Research Focus:**

Econometrics was another area of interest for time-series analytics, but operations research, tailored for data science programming and extensive datasets, emerged as the focal point. Excelling at solving objectives within specified constraints, it offers optimal solutions that set it apart from traditional machine learning models.

**Prescriptive Analytics vs. Predictive Analytics:**

It is important to distinguish prescriptive analytics (operations research) from predictive analytics (machine learning). The former provides optimal solutions based on defined constraints, while the latter predicts outcomes based on historical data.

**Transportation Example:**

In a transportation scenario, predictive models analyze historical data to suggest efficient routes. Operations research, however, prescriptively determines the least-cost path based on constraints, suggesting routes never traveled before. Another useful aspect of operations research and linear programming models is that they also handle revenue and expense variables quite well.
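A minimal prescriptive sketch of that idea, with made-up costs and capacities: choose how much to ship from two depots to three cities at least cost, subject to supply and demand constraints, using linear programming.

```python
# A small prescriptive example with invented numbers: minimize shipping cost
# from two depots to three cities subject to supply and demand constraints.
import numpy as np
from scipy.optimize import linprog

# Cost per unit shipped: rows = depots (A, B), columns = cities (1, 2, 3).
cost = np.array([[4, 6, 9],
                 [5, 3, 7]]).flatten()

# Supply constraints: each depot ships no more than its capacity.
A_ub = [[1, 1, 1, 0, 0, 0],     # depot A total shipments
        [0, 0, 0, 1, 1, 1]]     # depot B total shipments
b_ub = [80, 70]

# Demand constraints: each city receives exactly what it needs.
A_eq = [[1, 0, 0, 1, 0, 0],     # city 1
        [0, 1, 0, 0, 1, 0],     # city 2
        [0, 0, 1, 0, 0, 1]]     # city 3
b_eq = [40, 50, 30]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3))      # optimal shipment plan
print(res.fun)                  # total cost of that plan
```

Unlike a predictive model, the solver evaluates every feasible shipment plan implied by the constraints and returns the provably least-cost one.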

**Methodology and Insight:**

While both approaches may lead to similar conclusions, their methodologies diverge significantly. Predictive models embrace uncertainty, offering likely outcomes, while operations research precisely calculates optimal solutions, evaluating all possible choices.

**Data Science Synergy:**

Understanding this nuanced difference empowers an analyst or data scientist to approach problem-solving flexibly. Predictive models shine in uncertainty, providing choices based on learned experiences. Operations research excels with known inputs and complex combinations, delivering reliable solutions.

**Conclusion and Future Prospects:**

This case study illuminates the ongoing journey in cultivating a collaborative data science environment. As the capabilities of a team evolve, the prospect of adding talents like the previously mentioned econometrics, which excels in time-series forecasting, holds the promise of elevating capabilities to tackle even more complex challenges. Unleash the potential of data science synergy with operations research expertise! 

 🌐📈 #DataScience #OperationsResearch #AnalyticsSynergy #PrescriptiveAnalytics #PredictiveAnalytics #CaseStudy

Reference: http://tirabassi.com/

Friday, February 2, 2024

Unlocking Value Creation: The Power of Lifetime Customer Value in Operational Execution

You might see it in various places as CLV (Customer Lifetime Value) or LTV (Lifetime Value). Lifetime Customer Value, or LCV, is what I call this metric. The terms are fairly interchangeable in my experience; people who use these metrics regularly will know what you mean whichever one you use. LCV’s compact measurement of the value of an individual customer unit is powerful. It’s something I have been using for years, and I would like to share some insights into it.

Conceptually, LCV is the net value of a customer over their lifetime. In theory, if you calculated it as a net value, including allocated overhead costs, for every single customer and then summed all of those individual values up, you should come very close to the enterprise value of the business calculated in a discounted cash flow. These two things (the sum of LCVs and DCF enterprise value) equate because LCV is essentially the net present value of each individual customer’s cash flow, and when summed up, it should equal the net present value of the company’s cash flow. As mentioned, this assumes you calculate LCV on a net basis with accurate allocation of overheads.
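A simple sketch of that calculation, with illustrative numbers only: discount each customer’s expected monthly contribution by a survival (retention) curve, subtract acquisition cost, and optionally subtract an allocated share of overhead to move from a contribution-level LCV to a net LCV.

```python
# Illustrative LCV sketch: discount each customer's expected cash flow over a
# finite horizon, weight by the chance they are still a customer, and net off
# acquisition cost. All parameter values below are made-up examples.
def lifetime_customer_value(monthly_contribution, monthly_churn, discount_rate_annual,
                            cac, monthly_overhead_allocation=0.0, horizon_months=120):
    r = (1 + discount_rate_annual) ** (1 / 12) - 1       # monthly discount rate
    value = 0.0
    for month in range(1, horizon_months + 1):
        survival = (1 - monthly_churn) ** month           # probability still active
        cash_flow = monthly_contribution - monthly_overhead_allocation
        value += survival * cash_flow / (1 + r) ** month
    return value - cac

# Contribution-level LCV vs. net LCV after an allocated share of overhead.
print(lifetime_customer_value(40, 0.03, 0.12, cac=250))
print(lifetime_customer_value(40, 0.03, 0.12, cac=250, monthly_overhead_allocation=12))
```

The overhead parameter previews the contribution-versus-net distinction discussed later: leaving it at zero gives LCV at the contribution level, while a positive allocation gives the net figure that ties back to enterprise value.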

If you look at it from the other direction, you could say that LCV equals EV divided by the number of active customers. Cable television companies often use (at least I think they still do) EV divided by active customers to derive a value-per-subscriber metric for valuation purposes. This is an easy way to examine the relative strength of individual subscribers by comparing value per subscriber across companies.

The importance of these equalities is that LCV, as an operational metric usable in all areas of the organization, is tied to value creation for the entire business. If marketing pushes Customer Acquisition Cost (CAC) down, then LCV goes up, and the business should gain value. If the cost of goods sold goes up, then LCV goes down, and so does the value of the business. If an organization embraces this metric, they can push shareholder value creation alignment into many corners of a company.

When it comes to LCV, one of the main areas of focus in most cases, I find, is CAC. CAC can be volatile, especially in a world of digital marketing, where competitive forces can turn against you and make marketing very expensive for short and even sustained periods of time. As a result, if pricing and overall operating expenses are sticky over the course of a year, CAC tends to be the part of LCV that causes the most fluctuation.

When it comes to marketing, CAC drives value in two main ways. The marketing expense can fluctuate in or out of your favor, which drives LCV up or down. But given that marketing fluctuations can also translate into higher or lower customer acquisition counts, the effect can compound its impact on the overall valuation – the sum of the LCVs as described above.

The two-by-two below illustrates the concept at a very high level.

Another area of interest is LCV as a contribution calculation before overhead versus LCV as a net calculation after overhead. The table below shows the differences between the two calculations at a high level.

I already touched on the net calculation and how it is connected to EV. The contribution calculation gives you LCV down to the contribution margin level, which is to say LCV-Contribution is the lifetime customer value that can be used to pay all overhead and financing costs of the company. This is particularly useful if the organization has steady overhead costs that don’t increase quickly with the customer base. It gives the business operators a sense of how many customers they can add to the business on a marginal basis profitably. It allows the operators to take aggressive approaches to CAC and Cost of Goods/Services because every customer with some value at the contribution level will drive growth in EV. With that said, such an approach would lower net LCV over time as lower contribution clients would dilute the average LCV.

To sum it up, Lifetime Customer Value (LCV) is a powerful tool that goes beyond just numbers. It tells us the long-term value of each customer and how it connects to the overall value of a business. By paying attention to LCV, companies can make smart decisions that impact everything from marketing costs to the value of the entire business. In the fast-paced world of digital marketing, where things can change quickly, understanding and using LCV gives businesses a reliable way to plan for the future. The simple matrix and the difference between net and contribution calculations show how flexible and useful LCV can be. So, as businesses delve into LCV insights, they can uncover new ways to improve their strategies, build better relationships with customers, and set the stage for lasting success.

Reference: https://shorturl.at/xyPX0
