Time series data often contains underlying trends and patterns that can reveal valuable insights. However, trends can shift over time due to changes in the system being measured. Being able to detect when the trend in a time series changes can enable data analysts to adapt models, update forecasts, and investigate root causes.

Two useful statistical algorithms for detecting trend changes are the Z-score method and CUSUM (cumulative sum) control charts. In this post, we’ll walk through how to implement these techniques in Python.

The Z-Score method looks at the difference between the current value and a baseline value, divided by the standard deviation of the data. When this score crosses predetermined thresholds, it signals a significant change in trend. CUSUM accumulates the deviation between data points and a target value over time. When the cumulative sum exceeds set limits, it indicates the process mean has shifted.
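To make the two ideas concrete, here is a minimal sketch of both detectors. The function names, the rolling-window baseline for the Z-score, and the CUSUM slack value `k` and decision threshold `h` are illustrative choices for this sketch, not fixed conventions:

```python
import numpy as np

def zscore_changes(x, window=30, threshold=3.0):
    """Flag points whose Z-score against a rolling baseline exceeds threshold."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        baseline = x[i - window:i]          # recent history as the baseline
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(x[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

def cusum_changes(x, target=None, k=0.5, h=5.0):
    """Two-sided CUSUM: accumulate deviations from target, minus a slack k,
    and flag a shift when either cumulative sum exceeds the limit h."""
    x = np.asarray(x, dtype=float)
    if target is None:
        target = x.mean()
    s_hi = s_lo = 0.0
    flags = np.zeros(len(x), dtype=bool)
    for i, v in enumerate(x):
        s_hi = max(0.0, s_hi + (v - target) - k)  # drift upward
        s_lo = max(0.0, s_lo + (target - v) - k)  # drift downward
        if s_hi > h or s_lo > h:
            flags[i] = True
            s_hi = s_lo = 0.0                     # reset after signaling
    return flags
```

Note the difference in character: the Z-score check reacts to single large deviations, while CUSUM accumulates many small ones, which is why it tends to catch gradual mean shifts sooner.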

We’ll work through coding up these algorithms and applying them to synthetic data examples. We’ll also discuss practical considerations like choosing parameters, visualizing results, and dealing with noisy data. The implementations will leverage common Python data science libraries like pandas, NumPy, and Matplotlib.
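As a taste of the kind of synthetic data we’ll use, here is one way to build a series with a deliberate trend change partway through. The dates, regime lengths, and slope are arbitrary values chosen for this sketch:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
idx = pd.date_range("2023-01-01", periods=200, freq="D")
values = np.concatenate([
    rng.normal(10.0, 1.0, 100),                          # stable regime around 10
    10.0 + 0.2 * np.arange(100) + rng.normal(0, 1, 100), # upward trend begins at day 100
])
series = pd.Series(values, index=idx, name="value")
```

Because we control exactly where the change occurs (day 100 here), synthetic data like this lets us check whether a detector flags the right region before trusting it on real measurements.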

By the end, you’ll have a solid understanding of these statistical methods for change point detection and be able to start applying them to identify important shifts in your own time series datasets.

The full walkthrough code with explanations is available in the attached Jupyter notebook. Let me know if you have any other questions!