Position Cost Distribution

The Position Cost Distribution indicator (also known as the Market Position Overview, Chip Distribution, or CYQ Algorithm) provides an estimate of how shares are distributed across different price levels. Visually, it resembles the Volume Profile indicator, though they rely on distinct computational approaches.
🟠 Principle
The Position Cost Distribution algorithm is based on the principle that a security's total shares outstanding usually remains constant, except under conditions like stock splits, reverse splits, or new share issuance. It views all trading activity as simply exchanging share positions between holders at different price points.
By analyzing daily trade volume and the prior day's distribution, the algorithm infers the resulting share distribution after each day. By tracking these inferred transfers over time, the indicator builds up an aggregate view of the estimated share concentration at each price level. This provides insight into potential buying and selling pressure zones that could form support or resistance areas.
Together with the Volume Profile, the Position Cost Distribution gives traders multiple lenses for examining market structure from both a volume and positional standpoint. Both can help identify meaningful technical price levels.
🟠 Algorithm
The algorithm initializes by allocating all shares to the price range encompassed by the first bar displayed on the chart. Preferably, the chart window should include the stock's IPO date, allowing the model to distribute shares specifically to the IPO price.
For subsequent trading sessions, the indicator performs the following calculations:
1. The daily turnover ratio is calculated by dividing the bar's trading volume by total outstanding shares.
2. For each price level (bucket), the share count is reduced in proportion to the turnover ratio, representing shares transferring away from existing holders.
3. The bar's total volume is then added to buckets corresponding to that period's price range.
Currently, the model assumes each share has an equal probability of being exchanged, regardless of how long ago it was acquired or at what price. Potential optimizations could incorporate factors like making shares held longer face a smaller chance of transfer compared to more recently purchased shares.
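To make those steps concrete, here is a minimal Pine Script sketch of the update loop. It is not this indicator's actual code: it assumes a fixed 100-bucket grid with user-defined bounds, a constant user-supplied share count (a real implementation would source the float from fundamental data), and volume spread uniformly across the buckets covered by each bar's range; all input names are hypothetical.

```pine
//@version=5
indicator("Position Cost Distribution - sketch", overlay=false)

// Assumptions: fixed grid, constant share count, uniform intra-bar spread.
int   BUCKETS = 100
float shares  = input.float(1e9,    "Shares outstanding (assumed constant)")
float loBound = input.float(0.0,    "Grid lower bound")
float hiBound = input.float(1000.0, "Grid upper bound")

var float[] chips = array.new<float>(BUCKETS, 0.0)
float step = (hiBound - loBound) / BUCKETS

// Map a price onto a bucket index, clamped to the grid.
bucket(float price) =>
    int(math.max(0, math.min(BUCKETS - 1, (price - loBound) / step)))

if barstate.isfirst
    // Initialization: allocate all shares to the first bar's price range.
    int i0 = bucket(low)
    int i1 = bucket(high)
    for i = i0 to i1
        array.set(chips, i, shares / (i1 - i0 + 1))
else
    // Step 1: daily turnover ratio.
    float turnover = math.min(1.0, volume / shares)
    // Step 2: decay every bucket in proportion to the turnover ratio.
    for i = 0 to BUCKETS - 1
        array.set(chips, i, array.get(chips, i) * (1.0 - turnover))
    // Step 3: add the bar's volume across its own price range.
    int j0 = bucket(low)
    int j1 = bucket(high)
    for j = j0 to j1
        array.set(chips, j, array.get(chips, j) + volume / (j1 - j0 + 1))

// Expose the single price level holding the largest estimated position.
int   peakIdx = 0
float peakVal = 0.0
for i = 0 to BUCKETS - 1
    if array.get(chips, i) > peakVal
        peakVal := array.get(chips, i)
        peakIdx := i
plot(loBound + (peakIdx + 0.5) * step, "Peak cost level")
```

The sketch only plots the peak cost level; rendering the full profile would require box or line drawings, which are omitted here for brevity.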
────────────────────────────────────────────
Note (translated from Chinese): this indicator is a TradingView implementation of "Chip Distribution" (筹码分布) :)
Indicators and strategies
[Excalibur] Ehlers AutoCorrelation Periodogram Modified

Keep your coins folks, I don't need them, don't want them. If you wish to be generous, I do hope that charitable peoples worldwide with surplus food stocks may consider stocking local food banks before stuffing monetary bank vaults, for the crusade of remedying the needs of less than fortunate children, parents, elderly, homeless veterans, and everyone else who deserves nutritional sustenance for the soul.
DEDICATION:
This script is dedicated to the memory of Nikolai Dmitriyevich Kondratiev (Никола́й Дми́триевич Кондра́тьев) as tribute for being a pioneering economist and statistician, paving the way for modern econometrics by advocating rigorous and empirical methodologies. One of his most substantial contributions to the study of business cycle theory is a revolutionary hypothesis recognizing the existence of dynamic cycle-like phenomena inherent to economies, characterized by distinct phases of expansion, stagnation, recession, and recovery: what we now know as "Kondratiev Waves" (K-waves). Kondratiev was one of the first economists to recognize the vital significance of applying quantitative analysis to empirical data to evaluate economic dynamics by means of statistical methods. His understanding was that conceptual models alone were insufficient to adequately interpret real-world economic conditions, and that sophisticated analysis was necessary to better comprehend the nature of trending/cycling economic behaviors. Additionally, he recognized that prosperous economic cycles were predominantly driven by a combination of technological innovations and infrastructure investments that resulted in profound implications for economic growth and development.
I will mention this... nations' economies MUST be supported and defended to continuously evolve incrementally in order to flourish in perpetuity, OR suffer through eras with lasting ramifications of societal stagnation and implosion.
Analogous to the realm of economics, aperiodic cycles/frequencies, both enduring and ephemeral, exist in all facets of life, every second of every day. To name a few that any blind man can naturally see: heartbeat (cardiac cycles), respiration rates, circadian rhythms of sleep, powerful magnetic solar cycles, seasonal cycles, lunar cycles, weather patterns, vegetative growth cycles, and ocean waves. Do not pretend for one second that these basic aforementioned examples do not affect business cycle fluctuations in minuscule and monumental ways hour to hour, day to day, season to season, year to year, and decade to decade in every nation on the planet. Kondratiev's original seminal theories in macroeconomics from nearly a century ago have proven remarkably prescient, with many of his antiquated elementary observations/notions/hypotheses still being scholastically studied and topically researched further. Therefore, I am compelled to honor and recognize his statistical insight and foresight.
If only... Kondratiev could hold a pocket sized computer in the cup of both hands bearing the TradingView logo and platform services. I truly believe he would be amazed, in marvelous delight, with a GARGANTUAN smile on his face.
INTRODUCTION:
Firstly, this is NOT, technically speaking, an indicator like most others. I would describe it as an advanced cycle period detector to obtain market data spectral estimates with low latency and moderate frequency resolution. Developers can take advantage of this detector by creating scripts that utilize a "Dominant Cycle Source" input to adaptively govern algorithms. Be forewarned, I would only recommend this for advanced developers, not novices dabbling in code. Although, there is some Pine wizardry introduced here for novice Pine enthusiasts to witness and learn from. AI did describe the code in one super-crunched sentence as, "a rare feat of exceptionally formatted code masterfully balancing visual clarity, precision, and complexity to provide immense educational value for both programming newcomers and expert Pine coders alike."
Understand all of the aforementioned? Buckle up and proceed for a lengthy read of verbose complexity...
This is my enhanced and heavily modified version of the autocorrelation periodogram (ACP) for Pine Script v5.0. It was originally devised by the mathemagician John Ehlers for detecting dominant cycles (frequencies) in an asset's price action. I have been sitting on code similar to this for a long time, but I decided to unleash the advanced code with my fashion. Ehlers originally released this in multiple versions, one in a 2016 TASC article and the other in his last published 2013 book "Cycle Analytics for Traders", chapter 8. He wasn't joking about "concepts of advanced technical trading", and ACP is nowhere near his most intimidating and ingenious calculations in code. I will say the book goes into many finer details about the original periodogram, so if you wish to delve into even more elaborate info regarding Ehlers' original ACP form AND how you may adapt algorithms, you'll have to obtain one. Note to reader: comparing Ehlers' original code to my chimeric code embracing the "Power of Pine", you will notice they have little resemblance.
What you see is a new species of autocorrelation periodogram combining Ehlers' innovation with my fascinations of what ACP could be in a Pine package. One other intention of this script's code is to pay homage to Ehlers' lifelong works. Like Kondratiev, Ehlers is also a hardcore cycle enthusiast. I intend to carry on the fire Ehlers envisioned, and I believe that is literally displayed here as a pleasant "fiery" example endowed with Pine. With that said, I tried to make the code as computationally efficient as possible, without going into dozens of more crazy lines of code to speed things up even more. There are also a few creative modifications I made by making alterations to the originating formulas that I felt were improvements, one of them being lag reduction. By recently questioning every single thing I thought I knew about ACP, combined with the accumulation of my current knowledge base, this is the innovative revision I came up with. I could have improved it more but decided not to mind thrash too many TV members, maybe later...
I am now confident Pine should have adequate overhead left over to attach various indicators to the dominant cycle via input.source(). TV, I apologize in advance if in the future a server cluster combusts into a raging inferno... Coders, be fully prepared to build entire algorithms from pure raw code, because not all of the built-in Pine functions fully support dynamic periods (e.g. length=ANYTHING). Many of them do, as this was requested and granted a while ago, but some functions are just inherently finicky due to implementation combinations and MUST be emulated via raw code. I would imagine some comprehensive library or numerous authored scripts have portions of raw code for Pine built-ins somewhere on TV if you look diligently enough.
Notice: Unfortunately, I will not provide any integration support for members' projects at all. I have my own projects that require way too much of my day already. While I was refactoring my life (forgoing many other "important" endeavors) in the early half of 2023, I primarily focused on this code over and over in my surplus time. During that same time I was working on other innovations that are far above and beyond what this code is. I hope you understand.
The best way programmatically may be to incorporate this code into your private Pine project directly, after brutal testing of course, but that may be too challenging for many in early development. Being able to see the periodogram is also beneficial, so input sourcing may be the "better" avenue to tether portions of the dominant cycle to algorithms. Indicators uniquely able to utilize the dominantCycle may be advantageous when tethered to this script. The easiest way is to manually set your indicators to what ACP recognizes as the dominant cycle, but that's not actually dynamic real-time adaptation of an indicator. Different indicators may need a proportion of the dominantCycle, say half its value, while others may need its full value. That's up to you to figure out in practice. Sourcing one or more custom indicators dynamically to one detector's dominantCycle may require code like this: `int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))`. Keep in mind, some algos can use a float, while algos with a for loop require an integer.
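As a minimal sketch of the receiving side (assuming the detector is already on the chart and selected as the source), the consumer clamps the sourced value into ACP's 6..49 band and feeds it to a raw-coded moving average, which tolerates a bar-to-bar changing length even where a built-in might be finicky; all names besides the clamping line quoted above are hypothetical:

```pine
//@version=5
indicator("Dominant cycle consumer - sketch", overlay=true, max_bars_back=500)

// Clamp the externally sourced dominant cycle into ACP's 6..49 band.
int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))

// Raw-coded SMA that accepts a dynamic (series int) length.
dynSMA(float src, int len) =>
    float total = 0.0
    for i = 0 to len - 1
        total += src[i]
    total / len

plot(dynSMA(close, sourceDC), "Adaptive SMA")
```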
I have witnessed a few attempts by talented TV members at a Pine-based autocorrelation periodogram, but none in this caliber. Trust me, coding ACP is no ordinary task to accomplish in Pine, and modifying it, blessed with applicable improvements, is even more challenging. For over 4 years, I have been slowly improving this code here and there randomly. It is beautiful just like a real flame, but... this one can still burn you! My mind was fried to charcoal black a few times wrestling with it in the distant past. My very first attempt at translating ACP was a month-long endeavor because PSv3 simply didn't have arrays back then. Anyways, this is ACP with a newer engine, I hope you enjoy it. Any TV subscriber can utilize this code as they please. If you are capable of sufficiently using it properly, please use it wisely with intended good will. That is all I beg of you.
Lastly, you now see how I have rasterized my Pine with Ehlers' swami-like tech. Yep, this whole time I have been using hline() since PSv3, not plot(). Evidently, plot() still has a deficiency of being limited to only 32 plots when it comes to creating intense eye candy indicators, the last I checked. The use of hline() is the optimal choice for rasterizing Ehlers-styled heatmaps. This does only contain two color schemes of the many I have formerly created, but that's all that is essentially needed for this gizmo. Anything else is generally for a spectacle, or for seeing how brutally Pine can be color treated. The real hurdle is being able to manipulate colors dynamically with Merlin-like capabilities from multiple algo results. That's the truly challenging part of these heatmap contraptions to obtain multi-colored "predator vision" level indication. You now have basic hline() food for thought empowerment to wield as you can imaginatively dream in Pine projects.
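For illustration only (this is not the script's rendering code), the trick reduces to this: each heatmap row is the band between two constant hline() levels, and fill() repaints that band per bar with a series color; the toy "power" values below are stand-ins for real spectral components:

```pine
//@version=5
indicator("hline heatmap - sketch", overlay=false)

// Invisible horizontal levels define the rows of the raster.
h0 = hline(0, color = color.new(color.gray, 100))
h1 = hline(1, color = color.new(color.gray, 100))
h2 = hline(2, color = color.new(color.gray, 100))

// Toy "power" values in 0..1; a real periodogram supplies one per period.
float p1 = ta.rsi(close, 14) / 100.0
float p2 = ta.rsi(close, 28) / 100.0

// fill() accepts a series color, so each band becomes one heatmap row.
fill(h0, h1, color.from_gradient(p1, 0, 1, color.black, color.yellow))
fill(h1, h2, color.from_gradient(p2, 0, 1, color.black, color.yellow))
```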
PERIODOGRAM UTILITY IN REAL WORLD SCENARIOS:
This code is a testament to the abilities that have yet to be fully realized with indication advancements. Periodograms, spectrograms, and heatmaps are powerful tools with real-world applications in various fields such as financial markets, electrical engineering, astronomy, seismology, and neuro/medical applications. For instance, among these diverse fields, it may help traders and investors identify market cycles/periodicities in financial markets, support engineers in optimizing electrical or acoustic systems, aid astronomers in understanding celestial object attributes, assist seismologists with predicting earthquake risks, help medical researchers with neurological disorder identification, and detection of asymptomatic cardiovascular clotting in the vaxxed via full body thermography. In each of these fields of study, technologies in likeness to periodograms may very well provide us with a better sliver of analysis beyond what was ever formerly invented. Periodograms can identify dominant cycles and frequency components in data, which may provide valuable insights and possibly enable better-informed decisions. By utilizing periodograms within aspects of market analytics, individuals and organizations can potentially refrain from making blinded decisions and leverage data-driven insights instead.
PERIODOGRAM INTERPRETATION:
The periodogram renders the power spectrum of a signal, with the y-axis representing the periodicity (frequencies/wavelengths) and the x-axis representing time. The y-axis is divided into periods, with each elevation representing a period. In this periodogram, the y-axis ranges from 6 at the very bottom to 49 at the top, with intermediate values in between, all indicating the power of the corresponding frequency component by color. The higher the position on the y-axis, the longer the period, or the lower the frequency. The x-axis of the periodogram represents time and is divided into equal intervals, with each vertical column on the axis corresponding to the time interval when the signal was measured. The most recent values/colors are on the right side.
The intensity of the colors on the periodogram indicates the power level of the corresponding frequency or period. The fire color scheme is distinctly like the heat intensity from any casual flame witnessed in a small fire from a lighter, match, or campfire. The most intense power would be indicated by the brightest of yellow, while the lowest power would be indicated by the darkest shade of red or just black. By analyzing the pattern of colors across different periods, one may gain insights into the dominant frequency components of the signal and visually identify recurring cycles/patterns of periodicity.
SETTINGS CONFIGURATIONS BRIEFLY EXPLAINED:
Source Options: These settings allow you to choose the data source for the analysis. Using the `Source` selection, you may tether to additional data streams (e.g. close, hlcc4, hl2), which also may include samples from any other indicator. For example, this could be my "Chirped Sine Wave Generator" script found in my member profile. By using the `SineWave` selection, you may analyze a theoretical sinusoidal wave with a user-defined period, something already incorporated into the code. The `SineWave` will be displayed over top of the periodogram.
Roofing Filter Options: These inputs control the range of the passband for ACP to analyze. Ehlers had two versions of his highpass filters for his releases, so I included an option for you to see the obvious difference when performing a comparison of both. You may choose between 1st and 2nd order high-pass filters.
Spectral Controls: These settings control the core functionality of the spectral analysis results. You can adjust the autocorrelation lag, adjust the level of smoothing for Fourier coefficients, and control the contrast/behavior of the heatmap displaying the power spectra. I provided two color schemes by checking or unchecking a checkbox.
Dominant Cycle Options: These settings allow you to customize the various types of dominant cycle values. You can choose between floating-point and integer values, and select the rounding method used to derive the final dominantCycle values. Also, you may control the level of smoothing applied to the dominant cycle values.
DOMINANT CYCLE VALUE SELECTIONS:
External to the acs() function, the code takes a dominant cycle value returned from acs() and changes its numeric form based on a specified type and form chosen within the indicator settings. The dominant cycle value can be represented as an integer or a decimal number, depending on the attached algorithm's requirements. For example, FIR filters will require an integer while many IIR filters can use a float. The float forms can be either rounded, smoothed, or floored. If the resulting value is desired to be an integer, it can be rounded up/down or just be in an integer form, depending on how your algorithm may utilize it.
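A hedged sketch of that conversion stage is below; the input and variable names are hypothetical, and dcRaw merely stands in for the value returned by acs():

```pine
//@version=5
indicator("Dominant cycle form - sketch", overlay=true)

// dcRaw stands in for the float dominant cycle value returned by acs().
float dcRaw = 20.7

// Hypothetical input mirroring the described rounding choices.
string dcForm = input.string("Round", "Dominant Cycle Form", options=["Round", "Floor", "Ceiling", "Float"])

float dominantCycle = switch dcForm
    "Round"   => math.round(dcRaw)
    "Floor"   => math.floor(dcRaw)
    "Ceiling" => math.ceil(dcRaw)
    => dcRaw

// FIR-style consumers need an int; many IIR filters can accept the float.
int dominantCycleInt = int(dominantCycle)
plot(dominantCycleInt, "Dominant cycle (int form)")
```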
AUTOCORRELATION SPECTRUM FUNCTION BASICALLY EXPLAINED:
In the beginning of the acs() code, the population of caches for precalculated angular frequency factors and smoothing coefficients occurs. By precalculating these factors/coefficients only once and then storing them in arrays, the indicator saves time and computational resources when performing subsequent calculations that require them later.
In the following code block, labeled "Calculate AutoCorrelations", the autocorrelation is calculated for each period within the passband width. The calculation involves numerous summations of values extracted from the roofing filter. Finally, a correlation array is populated with the resulting values, which are normalized correlation coefficients.
Moving on to the next block of code, labeled "Decompose Fourier Components", Fourier decomposition is performed on the autocorrelation coefficients. It iterates this time through the applicable period range of 6 to 49, calculating the real and imaginary parts of the Fourier components. Frequencies 6 to 49 are the primary focus of interest for this periodogram. Using the precalculated angular frequency factors, the resulting real and imaginary parts are then utilized to calculate the spectral Fourier components, which are stored in an array for later use.
The next section of code smooths the noise-ridden Fourier components between the periods of 6 and 49 with a selected filter. This species also employs numerous SuperSmoothers to condition noisy Fourier components. One of the big differences is that Ehlers' versions used basic EMAs in this section of code; I decided to use SuperSmoothers instead.
The final sections of the acs() code determines the peak power component for normalization and then computes the dominant cycle period from the smoothed Fourier components. It first identifies a single spectral component with the highest power value and then assigns it as the peak power. Next, it normalizes the spectral components using the peak power value as a denominator. It then calculates the average dominant cycle period from the normalized spectral components using Ehlers' "Center of Gravity" calculation. Finally, the function returns the dominant cycle period along with the normalized spectral components for later external use to plot the periodogram.
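Under the caveat that this is neither Ehlers' nor this script's exact code, the skeleton of that pipeline might be sketched as follows; a simple detrender stands in for the real roofing filter, the SuperSmoother conditioning stage is omitted for brevity, and the 0.5 normalized-power floor in the center-of-gravity step follows Ehlers' published approach:

```pine
//@version=5
indicator("ACS skeleton - sketch", max_bars_back=500)

// Stand-in for the roofing filter; the real script conditions price first.
float filt = close - ta.sma(close, 10)
int avgLen = input.int(3, "Averaging Length")

var float[] corr  = array.new<float>(49, 0.0)
var float[] power = array.new<float>(50, 0.0)

// 1) Normalized autocorrelation for each lag (Pearson, computed raw).
for lag = 0 to 48
    float sx = 0.0
    float sy = 0.0
    float sxx = 0.0
    float syy = 0.0
    float sxy = 0.0
    for i = 0 to avgLen - 1
        float x = filt[i]
        float y = filt[lag + i]
        sx  += x
        sy  += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    float den = (avgLen * sxx - sx * sx) * (avgLen * syy - sy * sy)
    array.set(corr, lag, den > 0 ? (avgLen * sxy - sx * sy) / math.sqrt(den) : 0.0)

// 2) Project the correlations onto each candidate period (6..49).
for period = 6 to 49
    float re = 0.0
    float im = 0.0
    for n = 3 to 48
        float c = array.get(corr, n)
        re += c * math.cos(2.0 * math.pi * n / period)
        im += c * math.sin(2.0 * math.pi * n / period)
    array.set(power, period, re * re + im * im)

// 3) Normalize by the peak power, then take the center of gravity of the
//    components above a 0.5 floor as the dominant cycle period.
float peak = array.max(power)
float num = 0.0
float dnm = 0.0
for period = 6 to 49
    float pw = peak > 0 ? array.get(power, period) / peak : 0.0
    if pw > 0.5
        num += period * pw
        dnm += pw
float dominantCycle = dnm != 0.0 ? num / dnm : float(na)
plot(dominantCycle, "Dominant Cycle")
```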
POST SCRIPT:
Concluding, I have to acknowledge a newly found analyst for assistance that I couldn't receive from anywhere else. For one, Claude doesn't know much about Pine, is unfortunately color blind, and can't even see the Pine reference, but it was able to intuitively shred my code with laser-precise realizations. Not only that, formulating and reformulating my description needed crucial finesse applied to it, and I couldn't have provided what you have read here without that artificial insight. Finding the right order of words to convey the complexity of ACP and the elaborate accompanying content was a daunting task. No code in my life has ever absorbed so much time and hard fricking work as what you witness here, an ACP gem cut pristinely. I'm unveiling my version of ACP for an empowering cause, in the hopes a future global army of code wielders will tether it to highly functional computational contraptions they might possess. Here is ACP, fully blessed poetically with the "Power of Pine" in sublime code. ENJOY!
Support & Resistance AI (K means/median) [ThinkLogicAI]

█ OVERVIEW
K-means is a clustering algorithm commonly used in machine learning to group data points into distinct clusters based on their similarities. While K-means is not typically used directly for identifying support and resistance levels in financial markets, it can serve as a tool in a broader analysis approach.
Support and resistance levels are price levels in financial markets where the price tends to react or reverse. Support is a level where the price tends to stop falling and might start to rise, while resistance is a level where the price tends to stop rising and might start to fall. Traders and analysts often look for these levels as they can provide insights into potential price movements and trading opportunities.
█ BACKGROUND
The K-means algorithm has been around since the late 1950s, making it more than six decades old. The algorithm was introduced by Stuart Lloyd in his 1957 research paper "Least squares quantization in PCM" for telecommunications applications. However, it wasn't widely known or recognized until James MacQueen's 1967 paper "Some Methods for Classification and Analysis of Multivariate Observations," where he formalized the algorithm and referred to it as the "K-means" clustering method.
So, while K-means has been around for a considerable amount of time, it continues to be a widely used and influential algorithm in the fields of machine learning, data analysis, and pattern recognition due to its simplicity and effectiveness in clustering tasks.
█ COMPARE AND CONTRAST SUPPORT AND RESISTANCE METHODS
1) K-means Approach:
Cluster Formation: After applying the K-means algorithm to historical price change data and visualizing the resulting clusters, traders can identify distinct regions on the price chart where clusters are formed. Each cluster represents a group of similar price change patterns.
Cluster Analysis: Analyze the clusters to identify areas where clusters tend to form. These areas might correspond to regions of price behavior that repeat over time and could be indicative of support and resistance levels.
Potential Support and Resistance Levels: Based on the identified areas of cluster formation, traders can consider these regions as potential support and resistance levels. A cluster forming at a specific price level could suggest that this level has been historically significant, causing similar price behavior in the past.
Cluster Standard Deviation: In addition to looking at the means (centroids) of the clusters, traders can also calculate the standard deviation of price changes within each cluster. Standard deviation is a measure of the dispersion or volatility of data points around the mean. A higher standard deviation indicates greater price volatility within a cluster.
Low Standard Deviation: If a cluster has a low standard deviation, it suggests that prices within that cluster are relatively stable and less likely to exhibit sudden and large price movements. Traders might consider placing tighter stop-loss orders for trades within these clusters.
High Standard Deviation: Conversely, if a cluster has a high standard deviation, it indicates greater price volatility within that cluster. Traders might opt for wider stop-loss orders to allow for potential price fluctuations without getting stopped out prematurely.
Cluster Density: Each data point is assigned to a cluster, so a denser cluster acts more like gravity: price has spent more time at that level historically, making a reaction there more likely.
2) Traditional Approach:
Trendlines: Draw trendlines connecting significant highs or lows on a price chart to identify potential support and resistance levels.
Chart Patterns: Identify chart patterns like double tops, double bottoms, head and shoulders, and triangles that often indicate potential reversal points.
Moving Averages: Use moving averages to identify levels where the price might find support or resistance based on the average price over a specific period.
Psychological Levels: Identify round numbers or levels that traders often pay attention to, which can act as support and resistance.
Previous Highs and Lows: Identify significant previous price highs and lows that might act as support or resistance.
The key difference lies in the approach and the foundation of these methods. Traditional methods are based on well-established principles of technical analysis and market psychology, while the K-means approach involves clustering price behavior without necessarily incorporating market sentiment or specific price patterns.
It's important to note that while the K-means approach might provide an interesting way to analyze price data, it should be used cautiously and in conjunction with other traditional methods. Financial markets are influenced by a wide range of factors beyond just price behavior, and the effectiveness of any method for identifying support and resistance levels should be thoroughly tested and validated.
█ K MEANS ALGORITHM
The algorithm for K means is as follows:
1. Initialize cluster centers.
2. Assign data to clusters based on minimum distance.
3. Calculate each cluster center by taking the average (or median) of the cluster's members.
4. Repeat steps 2-3 until the cluster centers stop moving.
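A minimal 1-D sketch of that loop over an array of prices is shown below; it is not this indicator's actual code, a fixed iteration count stands in for the convergence test, and swapping the mean for the members' median gives the k-medians variant discussed later:

```pine
//@version=5
indicator("K-means S/R - sketch", overlay=true)

// Minimal 1-D k-means over an array of prices.
kmeans1d(float[] data, int k, int iters) =>
    int n = array.size(data)
    float lo = array.min(data)
    float hi = array.max(data)
    float[] centers = array.new<float>(k, 0.0)
    // Step 1: spread the initial centers evenly across the price range.
    for j = 0 to k - 1
        array.set(centers, j, lo + (hi - lo) * (j + 0.5) / k)
    for it = 1 to iters
        float[] sums   = array.new<float>(k, 0.0)
        int[]   counts = array.new<int>(k, 0)
        // Step 2: assign each point to its nearest center.
        for i = 0 to n - 1
            float x = array.get(data, i)
            int   best  = 0
            float bestD = math.abs(x - array.get(centers, 0))
            for j = 1 to k - 1
                float d = math.abs(x - array.get(centers, j))
                if d < bestD
                    bestD := d
                    best  := j
            array.set(sums, best, array.get(sums, best) + x)
            array.set(counts, best, array.get(counts, best) + 1)
        // Step 3: recompute each center as the mean of its members
        // (k-medians would use the median of the members instead).
        for j = 0 to k - 1
            if array.get(counts, j) > 0
                array.set(centers, j, array.get(sums, j) / array.get(counts, j))
    centers

// Usage: cluster the collected closes and draw the centers on the last bar.
var float[] prices = array.new<float>()
array.push(prices, close)
if barstate.islast
    float[] centers = kmeans1d(prices, 3, 20)
    for j = 0 to 2
        line.new(math.max(0, bar_index - 100), array.get(centers, j), bar_index, array.get(centers, j), color=color.blue)
```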
█ LIMITATIONS OF K MEANS
There are 3 main limitations of this algorithm:
Sensitive to Initializations: K-means is sensitive to the initial placement of centroids. Different initializations can lead to different cluster assignments and final results.
Assumption of Equal Sizes and Variances: K-means assumes that clusters have roughly equal sizes and spherical shapes. This may not hold true for all types of data. It can struggle with identifying clusters with uneven densities, sizes, or shapes.
Impact of Outliers: K-means is sensitive to outliers, as a single outlier can significantly affect the position of cluster centroids. Outliers can lead to the creation of spurious clusters or distortion of the true cluster structure.
█ LIMITATIONS IN APPLICATION OF K MEANS IN TRADING
Trading data often exhibits characteristics that can pose challenges when applying indicators and analysis techniques. Here's how the limitations of outliers, varying scales, and unequal variance can impact the use of indicators in trading:
Outliers are data points that significantly deviate from the rest of the dataset. In trading, outliers can represent extreme price movements caused by rare events, news, or market anomalies. Outliers can have a significant impact on trading indicators and analyses:
Indicator Distortion: Outliers can skew the calculations of indicators, leading to misleading signals. For instance, a single extreme price spike could cause indicators like moving averages or RSI (Relative Strength Index) to give false signals.
Risk Management: Outliers can lead to overly aggressive trading decisions if not properly accounted for. Ignoring outliers might result in unexpected losses or missed opportunities to adjust trading strategies.
Different Scales: Trading data often includes multiple indicators with varying units and scales. For example, prices are typically in dollars, volume in units traded, and oscillators have their own scale. Mixing indicators with different scales can complicate analysis:
Normalization: Indicators on different scales need to be normalized or standardized to ensure they contribute equally to the analysis. Failure to do so can lead to one indicator dominating the analysis due to its larger magnitude.
Comparability: Without normalization, it's challenging to directly compare the significance of indicators. Some indicators might have a larger numerical range and could overshadow others.
Unequal Variance: Unequal variance in trading data refers to the fact that some indicators might exhibit higher volatility than others. This can impact the interpretation of signals and the performance of trading strategies:
Volatility Adjustment: When combining indicators with varying volatility, it's essential to adjust for their relative volatilities. Failure to do so might lead to overemphasizing or underestimating the importance of certain indicators in the trading strategy.
Risk Assessment: Unequal variance can impact risk assessment. Indicators with higher volatility might lead to riskier trading decisions if not properly taken into account.
█ APPLICATION OF THIS INDICATOR
This indicator can be used in 2 ways:
1) Make a directional trade:
If a trader thinks price will go higher or lower and price is within a cluster zone, the trader can take a position and place a stop on the 1 SD band around the cluster. As one can see below, the trader can go long at the green arrow and place a stop at the one-standard-deviation mark for that cluster below it, at the red arrow. Using this, we can calculate a risk-to-reward ratio.
Calculating risk to reward: targeting a risk-to-reward ratio of 2:1, the trader could clearly achieve it, given that the next resistance area above (in the orange cluster) lies beyond the 2:1 target.
2) Take a reversal Trade:
We can use cluster centers (support and resistance levels) to go in the opposite direction that price is currently moving in hopes of price forming a pivot and reversing off this level.
Similar to the directional trade, we can use the standard deviation of the cluster to place a stop just in case we are wrong.
In the example below, we can see that shorting at the red arrow and placing a stop one standard deviation above this cluster would give us a profitable trade with minimal risk.
Using the cluster density table in the upper right informs the trader just how dense the cluster is. Higher density clusters will give a higher likelihood of a pivot forming at these levels and price being rejected and switching direction with a larger move.
█ FEATURES & SETTINGS
General Settings:
Number of clusters: The user can select from three to five clusters. A good rule of thumb is that if you are trading intraday, less is more (think 3 rather than 5). For daily charts, 4 to 5 clusters work well.
Cluster Method: To get around the outlier limitation of k-means clustering, the median was added. This gives the user the ability to choose either k-means or k-medians clustering. K-means is the preferred method if the user thinks there are no large outliers; if there appear to be large outliers, or it is assumed there are, then k-medians is preferred.
Bars Back To Train On: This is the number of bars to include in the clustering. This number is important so that the user includes bars that are recent, but not so far back that they are out of the scope of where price can realistically be. For example, the S&P 500 has been in a range for the last two years, so 505 days in this setting would be more relevant than looking back five years, because price would have to move far to revisit those old levels.
Show SD Bands: Select this to show the 1 standard deviation bands around the support and resistance level or unselect this to just show the support and resistance level by itself.
Features:
Besides the support and resistance levels and standard deviation bands, this indicator gives a table in the upper right hand corner to show the density of each cluster (support and resistance level) and is color coded to the cluster line on the chart. Higher density clusters mean price has been there previously more than lower density clusters and could mean a higher likelihood of a reversal when price reaches these areas.
█ WORKS CITED
Victor Sim, "Using K-means Clustering to Create Support and Resistance", 2020, towardsdatascience.com
Chris Piech, "K means", stanford.edu
█ ACKNOWLEDGMENTS
@jdehorty - Thanks for the publish template. It made organizing my thoughts and work a lot easier.
ABC on Recursive Zigzag [Trendoscope]

There are several implementations of the ABC pattern in TradingView and Pine Script. However, we have made this indicator to provide users additional quantifiable information along with the flexibility to experiment and develop their own strategy based on the patterns.
🎲 Highlights of this indicator over other ABC implementations are:
The implementation is based on a recursive multi-level zigzag, which allows bigger as well as smaller patterns to be identified
Allows users to set their trading rules with respect to entry, target and stop ratios, experiment and build their own strategy based on the ABC pattern.
The backtest summary, including win ratio and risk-reward, will help users understand the profitability of the different settings being used.
🎲 Concept of ABC Pattern
The ABC pattern, also known as the "Corrective Wave" or "Zigzag Pattern," is a fundamental concept in Elliott Wave Theory, which is widely used in technical analysis to identify and predict price movements in financial markets.
The ABC pattern is a three-wave corrective pattern that typically occurs within the context of a larger impulse or trending wave. It consists of three waves labeled alphabetically, each representing a price movement: an initial wave (A), a partial retracement (B), and a final wave (C).
Wave A (Impulse Wave): Wave A is the first leg of the ABC pattern and is characterized by a strong price move in the opposite direction of the prevailing trend. It is often driven by a fundamental or sentiment-driven event that temporarily disrupts the trend.
Wave B (Corrective Wave): Wave B is the corrective wave that follows Wave A. It represents a partial retracement of Wave A's price movement. Wave B can take various forms, such as a simple correction or a complex correction (e.g., a triangle or a flat correction). It typically doesn't retrace the entire length of Wave A.
Wave C (Impulse Wave): Wave C is the final leg of the ABC pattern and is characterized by a strong price move in the same direction as the prevailing trend. It often surpasses the starting point of Wave A and confirms the resumption of the larger trend.
🎲 Indicator Components
Upon loading the indicator on the chart, we can observe the following components on the chart.
Pattern Drawings are the graphical representation of detected patterns. Please note that patterns will not always be present on the chart; they appear as price forms them.
Trade Box is the box representing trade signals of the pattern. These trade levels are generated based on the user settings.
Summary Table is the back test summary containing details of historical pattern performance including Win Ratio and Risk Reward.
🎲 Indicator Settings
Details of each user setting are provided in the tooltips. Below is a snapshot of them.
🎲 Alerts
A basic level of alerting is built into the script using the alert() function to highlight the following conditions:
New ABC Pattern
Updates to existing Pattern
Both conditions will alert with simple text messages. There is not much customization provided as part of this indicator. We will consider providing more options in future versions based on the interest and demand shown by users.
Signal Adapter

This Signal Adapter script can compose a signal based on inputs from other simple (non-signal) indicators and forward it to the "Template Trailing Strategy". It allows the user to combine up to eight external inputs and define the conditions that will trigger the start, end, cancel start, and cancel end deals. A signal will be composed from those user-defined conditions. The "indicator on indicator" feature is needed so you can forward the resulting signal to the "Template Trailing Strategy"; thus, you should be a Plus or Premium user to get its full potential. It is very convenient for those who want to create a strategy without coding their own signal indicator, and for those who want to quickly prototype various ideas based on simple conditions.
Multi-Asset Performance [Spaghetti] - By Leviathan

This indicator visualizes the cumulative percentage changes or returns of 30 symbols over a given period and offers a unique set of tools and data analytics for deeper insight into the performance of different assets.
Multi Asset Performance indicator (also called “Spaghetti”) makes it easy to monitor the changes in Price, Open Interest, and On Balance Volume across multiple assets simultaneously, distinguish assets that are overperforming or underperforming, observe the relative strength of different assets or currencies, use it as a tool for identifying mean reversion opportunities and even for constructing pairs trading strategies, detect "risk-on" or "risk-off" periods, evaluate statistical relationships between assets through metrics like correlation and beta, construct hedging strategies, trade rotations and much more.
Start by selecting a time period (e.g., 1 DAY) to set the interval for when data is reset. This will provide insight into how price, open interest, and on-balance volume change over your chosen period. In the settings, asset selection is fully customizable, allowing you to create three groups of up to 30 tickers each. These tickers can be displayed in a variety of styles and colors. Additional script settings offer a range of options, including smoothing values with a Simple Moving Average (SMA), highlighting the top or bottom performers, plotting the group mean, applying heatmap/gradient coloring, generating a table with calculations like beta, correlation, and RSI, creating a profile to show asset distribution around the mean, and much more.
One of the most important script tools is the screener table, which can display:
🔸 Percentage Change (Represents the return or the percentage increase or decrease in Price/OI/OBV over the current selected period)
🔸 Beta (Represents the sensitivity or responsiveness of asset's returns to the returns of a benchmark/mean. A beta of 1 means the asset moves in tandem with the market. A beta greater than 1 indicates the asset is more volatile than the market, while a beta less than 1 indicates the asset is less volatile. For example, a beta of 1.5 means the asset typically moves 150% as much as the benchmark. If the benchmark goes up 1%, the asset is expected to go up 1.5%, and vice versa.)
🔸 Correlation (Describes the strength and direction of a linear relationship between the asset and the mean. Correlation coefficients range from -1 to +1. A correlation of +1 means that two variables are perfectly positively correlated; as one goes up, the other will go up in exact proportion. A correlation of -1 means they are perfectly negatively correlated; as one goes up, the other will go down in exact proportion. A correlation of 0 means that there is no linear relationship between the variables. For example, a correlation of 0.5 between Asset A and Asset B would suggest that when Asset A moves, Asset B tends to move in the same direction, but not perfectly in tandem.)
🔸 RSI (Measures the speed and change of price movements and is used to identify overbought or oversold conditions of each asset. The RSI ranges from 0 to 100 and is typically used with a time period of 14. Generally, an RSI above 70 indicates that an asset may be overbought, while RSI below 30 signals that an asset may be oversold.)
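A hedged sketch of how those three metrics can be computed for a single ticker against a benchmark is shown below; the tickers and default lengths are assumptions, and beta is derived via the identity cov(a,b) = corr(a,b) * stdev(a) * stdev(b):

```pine
//@version=5
indicator("Screener metrics - sketch", overlay=false)

int refLen = input.int(50, "Reference Length")
int rsiLen = input.int(14, "RSI Length")

// Assumed example ticker and benchmark (stand-in for the group mean).
float asset = request.security("AMEX:EEM", timeframe.period, close)
float bench = request.security("AMEX:SPY", timeframe.period, close)

// Per-bar percentage change of each series.
float aRet = asset / asset[1] - 1.0
float bRet = bench / bench[1] - 1.0

// Correlation of returns, and beta = corr * stdev(asset) / stdev(bench).
float corr = ta.correlation(aRet, bRet, refLen)
float beta = corr * ta.stdev(aRet, refLen) / ta.stdev(bRet, refLen)
float rsi  = ta.rsi(asset, rsiLen)

plot(corr, "Correlation")
plot(beta, "Beta")
plot(rsi / 100.0, "RSI (scaled)")
```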
⚙️ Settings Overview:
◽️ Period
Periodic inputs (e.g. daily, monthly, etc.) determine when the values are reset to zero and begin accumulating again until the period is over. This visualizes the net change in the data over each period. The input "Visible Range" is auto-adjustable as it starts the accumulation at the leftmost bar on your chart, displaying the net change in your chart's visible range. There's also the "Timestamp" option, which allows you to select a specific point in time from where the values are accumulated. The timestamp anchor can be dragged to a desired bar via Tradingview's interactive option. Timestamp is particularly useful when looking for outperformers/underperformers after a market-wide move. The input positioned next to the period selection determines the timeframe on which the data is based. It's best to leave it at default (Chart Timeframe) unless you want to check the higher timeframe structure of the data.
◽️ Data
The first input in this section determines the data that will be displayed. You can choose between Price, OI, and OBV. The second input lets you select which one out of the three asset groups should be displayed. The symbols in the asset group can be modified in the bottom section of the indicator settings.
◽️ Appearance
You can choose to plot the data in the form of lines, circles, areas, and columns. The colors can be selected by choosing one of the six pre-prepared color palettes.
◽️ Labeling
This input allows you to show/hide the labels and select their appearance and size. You can choose between Label (colored pointed label), Label and Line (colored pointed label with a line that connects it to the plot), or Text Label (colored text).
◽️ Smoothing
If selected, this option will smooth the values using a Simple Moving Average (SMA) with a custom length. This is used to reduce noise and improve the visibility of plotted data.
◽️ Highlight
If selected, this option will highlight the top and bottom N (custom number) plots, while shading the others. This makes the symbols with extreme values stand out from the rest.
◽️ Group Mean
This input allows you to select the data that will be considered as the group mean. You can choose between Group Average (the average value of all assets in the group) or First Ticker (the value of the ticker that is positioned first on the group's list). The mean is then used in calculations such as correlation (as the second variable) and beta (as a benchmark). You can also choose to plot the mean by clicking on the checkbox.
◽️ Profile
If selected, the script will generate a vertical volume profile-like display with 10 zones/nodes, visualizing the distribution of assets below and above the mean. This makes it easy to see how many or what percentage of assets are outperforming or underperforming the mean.
◽️ Gradient
If selected, this option will color the plots with a gradient based on the proximity of the value to the upper extreme, zero, and lower extreme.
◽️ Table
This section includes several settings for the table's appearance and the data displayed in it. The "Reference Length" input determines the number of bars back that are used for calculating correlation and beta, while "RSI Length" determines the length used for calculating the Relative Strength Index. You can choose the data that should be displayed in the table by using the checkboxes.
◽️ Asset Groups
This section allows you to modify the symbols that have been selected to be a part of the 3 asset groups. If you want to change a symbol, you can simply click on the field and type the ticker of another one. You can also show/hide a specific asset by using the checkbox next to the field.
SimilarityMeasures

Library "SimilarityMeasures"
Similarity measures are statistical methods used to quantify the distance between different data sets
or strings. There are various types of similarity measures, including those that compare:
- data points (SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl),
- strings (Edit(Levenshtein), Lee, Hamming, Jaro),
- probability distributions (Mahalanobis, Fidelity, Bhattacharyya, Hellinger),
- sets (Kumar Hassebrook, Jaccard, Sorensen, Chi Square).
---
These measures are used in various fields such as data analysis, machine learning, and pattern recognition. They
help to compare and analyze similarities and differences between different data sets or strings, which
can be useful for making predictions, classifications, and decisions.
---
References:
en.wikipedia.org
cran.r-project.org
numerics.mathdotnet.com
github.com
github.com
github.com
Encyclopedia of Distances, doi.org
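For orientation, a hypothetical usage sketch follows; the import path's publisher namespace and version are placeholders to be replaced with the library's real ones, and the example compares the two most recent 10-bar windows of closing prices:

```pine
//@version=5
indicator("SimilarityMeasures usage - sketch")

// Placeholder import path: substitute the library's real publisher/version.
import PUBLISHER/SimilarityMeasures/1 as sim

// Two 10-bar windows of closes: the latest window vs. the one before it.
float[] p = array.new<float>()
float[] q = array.new<float>()
for i = 0 to 9
    array.push(p, close[i])
    array.push(q, close[i + 10])

plot(sim.euclidean(p, q), "Euclidean distance between windows")
```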
ssd(p, q)
Sum of squared difference for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the squared euclidean distance.
euclidean(p, q)
Euclidean distance for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the straight-line (or Euclidean) distance.
manhattan(p, q)
Manhattan distance for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Measure of absolute differences between both points.
minkowski(p, q, p_value)
Minkowsky Distance for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
p_value (float) : `float` P value, default=1.0 (1: manhattan, 2: euclidean); does not support chebyshev.
Returns: Measure of similarity in the normed vector space.
chebyshev(p, q)
Chebyshev distance for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Measure of maximum absolute difference.
correlation(p, q)
Correlation distance for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Measure of distance based on the linear correlation between the distributions.
cosine(p, q)
Cosine distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The Cosine distance between vectors `p` and `q`.
---
angiogenesis.dkfz.de
camberra(p, q)
Camberra distance for N dimensions.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Weighted measure of absolute differences between both points.
mae(p, q)
Mean absolute error is a normalized version of the sum of absolute difference (manhattan).
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Mean absolute error of vectors `p` and `q`.
mse(p, q)
Mean squared error is a normalized version of the sum of squared difference.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Mean squared error of vectors `p` and `q`.
lorentzian(p, q)
Lorentzian distance between provided vectors.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Lorentzian distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
intersection(p, q)
Intersection distance between provided vectors.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Intersection distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
penrose(p, q)
Penrose Shape distance between provided vectors.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Penrose shape distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
meehl(p, q)
Meehl distance between provided vectors.
Parameters:
p (float[]) : `array` Vector with first numeric distribution.
q (float[]) : `array` Vector with second numeric distribution.
Returns: Meehl distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
edit(x, y)
Edit (aka Levenshtein) distance for indexed strings.
Parameters:
x (int[]) : `array` Indexed array.
y (int[]) : `array` Indexed array.
Returns: Number of deletions, insertions, or substitutions required to transform source string into target string.
---
generated description:
The Edit distance is a measure of similarity used to compare two strings. It is defined as the minimum number of
operations (insertions, deletions, or substitutions) required to transform one string into another. The operations
are performed on the characters of the strings, and the cost of each operation depends on the specific algorithm
used.
The Edit distance is widely used in various applications such as spell checking, text similarity, and machine
translation. It can also be used for other purposes like finding the closest match between two strings or
identifying the common prefixes or suffixes between them.
---
github.com
www.red-gate.com
planetcalc.com
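As a concrete illustration of that recurrence (not the library's internal code), a two-row dynamic-programming version over indexed int arrays might look like this:

```pine
//@version=5
library("EditSketch")

// Two-row dynamic-programming sketch of the Levenshtein recurrence.
export editSketch(int[] x, int[] y) =>
    int m = array.size(x)
    int n = array.size(y)
    int[] prev = array.new<int>(n + 1, 0)
    int[] curr = array.new<int>(n + 1, 0)
    for j = 0 to n
        array.set(prev, j, j)          // cost of building y[0..j] from ""
    for i = 1 to m
        array.set(curr, 0, i)          // cost of erasing x[0..i] to ""
        for j = 1 to n
            int cost = array.get(x, i - 1) == array.get(y, j - 1) ? 0 : 1
            int del = array.get(prev, j) + 1
            int ins = array.get(curr, j - 1) + 1
            int sub = array.get(prev, j - 1) + cost
            array.set(curr, j, math.min(del, math.min(ins, sub)))
        for j = 0 to n
            array.set(prev, j, array.get(curr, j))
    array.get(prev, n)
```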
lee(x, y, dsize)
Distance between two indexed strings of equal length.
Parameters:
x (int[]) : `array` Indexed array.
y (int[]) : `array` Indexed array.
dsize (int) : `int` Dictionary size.
Returns: Distance between two strings by accounting for dictionary size.
---
www.johndcook.com
hamming(x, y)
Distance between two indexed strings of equal length.
Parameters:
x (int[]) : `array` Indexed array.
y (int[]) : `array` Indexed array.
Returns: Number of positions at which the two sequences differ.
---
en.wikipedia.org
jaro(x, y)
Distance between two indexed strings.
Parameters:
x (int[]) : `array` Indexed array.
y (int[]) : `array` Indexed array.
Returns: Measure of two strings' similarity: the higher the value, the more similar the strings are.
The score is normalized such that `0` equates to no similarities and `1` is an exact match.
---
rosettacode.org
mahalanobis(p, q, VI)
Mahalanobis distance between two vectors with population inverse covariance matrix.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
VI (matrix&lt;float&gt;) : `matrix` Inverse of the covariance matrix.
Returns: The mahalanobis distance between vectors `p` and `q`.
---
people.revoledu.com
stat.ethz.ch
docs.scipy.org
fidelity(p, q)
Fidelity distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The Bhattacharyya Coefficient between vectors `p` and `q`.
---
en.wikipedia.org
bhattacharyya(p, q)
Bhattacharyya distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The Bhattacharyya distance between vectors `p` and `q`.
---
en.wikipedia.org
hellinger(p, q)
Hellinger distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The hellinger distance between vectors `p` and `q`.
---
en.wikipedia.org
jamesmccaffrey.wordpress.com
kumar_hassebrook(p, q)
Kumar Hassebrook distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The Kumar Hassebrook distance between vectors `p` and `q`.
---
github.com
jaccard(p, q)
Jaccard distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The Jaccard distance between vectors `p` and `q`.
---
github.com
sorensen(p, q)
Sorensen distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
Returns: The Sorensen distance between vectors `p` and `q`.
---
people.revoledu.com
chi_square(p, q, eps)
Chi Square distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
eps (float) : `float` Small epsilon value added to the denominator to avoid division by zero.
Returns: The Chi Square distance between vectors `p` and `q`.
---
uw.pressbooks.pub
stats.stackexchange.com
www.itl.nist.gov
kulczynsky(p, q, eps)
Kulczynsky distance between provided vectors.
Parameters:
p (float[]) : `array` 1D Vector.
q (float[]) : `array` 1D Vector.
eps (float) : `float` Small epsilon value added to the denominator to avoid division by zero.
Returns: The Kulczynsky distance between vectors `p` and `q`.
---
github.com
All Candlestick Patterns on Backtest [By MUQWISHI]

▋ INTRODUCTION:
The “All Candlestick Patterns on Backtest” indicator generates a table that offers a clear visualization of the historical return percentages for each candlestick pattern strategy over a specified time period. This table serves as an organized resource and a launching point for in-depth research into candle formations. It may help to rectify misconceptions surrounding candlestick patterns, refine trading approaches, and provide a foundation for making informed decisions in a trading journey.
_______________________
▋ OVERVIEW:
_______________________
▋ CREDIT:
Credit to the public technical “*All Candlestick Patterns*” indicator.
_______________________
▋ TABLE:
_______________________
▋ CHART:
_______________________
▋ INDICATOR SETTINGS:
#Section One: Table Setting
#Section Two: Backtest Setting
(1) Backtest Starting Period.
Note: If the datetime of the first candle on the chart is after the entered datetime, the calculation will start from the first candle on the chart.
(2) Initial Equity ($).
(3) Leverage: Current Equity x Leverage Value.
(4) Entry Mode:
- “At Close”: Execute the entry order as soon as the candle is confirmed.
- “Breakout High (Low for Short)”: Stop-limit buy order; the entry order will be executed as soon as the next candle breaks out above the high of the last pattern’s candle (below the low for short).
(5) Cancel Entry Within Bars: This option is applicable with {Entry Mode = Breakout High (Low for Short)}, to cancel the entry order if it's not executed within a selected number of bars.
(6) Stoploss Range: the range refers to the pattern’s high minus the pattern’s low.
(7) Risk:Reward: the risk:reward range is measured from the entry price level. For example, say a pattern triggers with a range of 10 points and the entry price is 100 (see the sketch below):
- For a 1:1 risk:reward, the stoploss would be at 90 and the take-profit at 110.
- For a 1:3 risk:reward, the stoploss would be at 90 and the take-profit at 130.
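A hedged sketch of that arithmetic for a long setup; the prior bar's extremes are stand-ins for the detected pattern's high and low:

```pine
//@version=5
indicator("Risk:Reward levels - sketch", overlay=true)

// The prior bar's extremes stand in for the detected pattern's range.
float rr          = input.float(3.0, "Reward per unit of risk")
float patternHigh = high[1]
float patternLow  = low[1]

float entry  = close
float risk   = patternHigh - patternLow   // the "Stoploss Range"
float stop   = entry - risk               // long example: stop below entry
float target = entry + risk * rr          // take-profit at risk * R:R

plot(stop, "Stoploss", color.red)
plot(target, "Takeprofit", color.green)
```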
#Section Three: Technical & Candle Patterns
_______________________
▋ Comments:
This table was developed for research and educational purposes.
Candlestick patterns are almost the same as those seen in the “*All Candlestick Patterns*” indicator.
The table results should not be taken as a major concept to build a trading decision.
Personally, I see candlestick patterns as a means to comprehend the psychology of the market and a help in following the price action.
Please let me know if you have any questions.
Thank you.
CE - 42MACRO Equity Factor Table

This is Part 1 of 2 from the 42MACRO Recreation Series.
The CE - 42MACRO Equity Factor Table is a whole toolbox packaged in a single indicator.
It aims to provide a probabilistic insight into the market-realized GRID Macro Regime, uses a multiplex of important Assets and Indices to form a high-probability Implied Correlation expectation, and allows you to derive extra market insights by showing the most important aggregates and their performance over multiple timeframes... and what that might mean for the whole market direction, as well as the underlying asset.
WARNING
By the nature of the macro regimes, the outcomes are more accurate over longer Chart Timeframes (Week to Months).
However, it is also a valuable tool for forming a proper, market-realized, short- to medium-term bias.
NOTE
This Indicator is intended to be used alongside the 2nd part, "CE - 42MACRO Yield and Macro", for a more holistic approach and higher accuracy. Due to coding limitations, they cannot be merged into one Indicator.
Methodology:
The Equity Factor Table tracks specifically chosen Assets to identify their performance and adds the combined performances together to visualize 42MACRO's GRID Equity Model.
For this it uses the below Assets, with more to come:
Dividend Compounders ( AMEX:SPHD )
Mid Caps ( AMEX:VO )
Emerging Markets ( AMEX:EEM )
Small Caps ( AMEX:IWM )
Mega Cap Growth ( NASDAQ:QQQ )
Brazil ( AMEX:EWZ )
United Kingdom ( AMEX:EWU )
Growth ( AMEX:IWF )
United States ( AMEX:SPY )
Japan ( AMEX:DXJ )
Momentum ( AMEX:MTUM )
China ( AMEX:FXI )
Low Beta ( AMEX:SPLV )
International ex-US ( NASDAQ:ACWX )
India ( AMEX:INDA )
Eurozone ( AMEX:EZU )
Quality ( AMEX:QUAL )
Size ( AMEX:OEF )
Functionalities:
1. Correlations
Takes a measure of Cross Market Correlations
2. Implied Trend
Calculates the trend for each Asset and uses the Correlation to obtain the Implied Trend for the underlying Asset
There are multiple functionalities to enhance Signal Speed and precision...
Reading a signal only above a certain threshold; below it, the signal is colored gray to indicate noise or unclear market behavior
Normalization of Signal
Double Normalization of Signal for more Speed... ideal for the Crypto Market
Using an additional Hull Moving Average to enhance Signal Speed
Additional simple Background coloring to get a Signal from the HMA
Barcoloring based on the Implied Correlation
3. Equity Factor Table
Shows market realized Asset performance
Provides the approximate realized GRID market regimes
Informs about "Risk ON" and "Risk OFF" market states
Now into the juicy stuff...
Visuals:
There is a variety of options to change visual settings of what is plotted and where
+ additional considerations.
Everything that is relevant in the underlying logic which can improve comprehension can be visualized with these options.
More to come
Market Correlation:
The Market Correlation Table takes the Correlation of all the Assets to the Asset on the Chart.
It furthermore uses the Normalized KAMA Oscillator by IkkeOmar to analyse the current trend of every single Asset.
(To enhance the Signal you can apply the mentioned Indicator on the relevant Assets to find your target Asset movements that you intend to capture...
and then change the length of the Indicator in here)
It then Implies a Correlation based on the Trend and the Correlation to give a probabilistically adjusted expectation for the future Chart Asset Movement.
This is strengthened by taking the average of all Implied Trends.
Thus the Correlation Table provides valuable insights about probabilistically likely Movement of the Asset over the defined time duration,
providing alpha for Traders and Investors alike.
Equity Factors:
The table provides valuable information about the current market environment (whether it's risk on or risk off),
the rough GRID models from 42MACRO and the actual market performance.
This allows you to obtain a deeper understanding of how the market works and makes it simple to identify the actual market direction;
it also makes it possible to derive overall market health and shows market strength or weakness.
Utility:
The Equity Factor Table is divided in 4 Sections which are the GRID regimes:
Economic Growth:
Goldilocks
Reflation
Economic Contraction:
Inflation
Deflation
Top 5 Equity Factors:
Are the values green for a specific Column?
If so then the market reflects the corresponding GRID behavior.
Bottom 5 Equity Factors:
Are the values red for a specific Column?
If so then the market reflects the corresponding GRID behavior.
So if we have Goldilocks as current regime we would see green values in the Top 5 Goldilocks Cells and red values in the Bottom 5 Goldilocks Cells.
You will find that Reflation will look similar, as it is also a sign of Economic Growth.
Same is the case for the two Contraction regimes.
This whole Indicator, as well as the second part, is based in large part on 42MACRO's models.
I only brought them into TV and added things on top of it.
If you have questions or need a more in-depth guide DM me.
Will make a guide to all functionalities if necessity becomes apparent.
GM
[SS] Linear Modeler
Hello everyone,
This is the linear modeler indicator.
It is a statistical based indicator that provides a likely price target and range based on a linear regression time series analysis.
To represent it visually, the indicator draws a linear regression channel and plots out the range at various points based on the current trend (see the chart below):
The indicator will perform the same assessment, but give you a working range and timeline for targets.
As well, the indicator will back-test the range and variables to see how it is performing and how reliable the results are likely to be.
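For intuition, below is a minimal Pine sketch of this kind of linear regression projection. It is not the indicator's actual code; the ±1 standard deviation band is an assumed stand-in for the plotted range:

```pine
//@version=5
indicator("Linear projection sketch", overlay = true)
len      = input.int(100, "Regression length")
forecast = input.int(10,  "Bars ahead")
// least-squares endpoint and per-bar slope over the last `len` closes
endpoint = ta.linreg(close, len, 0)
slope    = endpoint - ta.linreg(close, len, 1)
target   = endpoint + slope * forecast       // a naive "most likely target" in `forecast` bars
band     = ta.stdev(close, len)              // assumed +/- 1 stdev as a stand-in for the range
plot(target,        "Projected target", color = color.blue)
plot(target + band, "Upper range",      color = color.gray)
plot(target - band, "Lower range",      color = color.gray)
```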
General Functions:
In the chart above you can see all the various parameters and functions.
The indicator will display the most likely target (MLT) to be expected within the next pre-determined timeframe (by candles).
So for the first target, the indicator is saying that within the next 10 candles, BA's MLT is 221.46, and based on back-test (BT) results the reliability of this assessment is around 46%.
The indicator will also display the anticipated range at each designated timeframe.
In the chart above, we can see that at 20 candles, the likely range BA should be trading in is between 204 and 238, with a reliability of around 62% based on previous performance.
Plot Functions:
As this is performing a linear time series projection, you can have the indicator plot the projected ranges. Simply go to the settings menu and select the desired forecast length:
This will plot out the desired range and result over the specified time period. Here is an example of BA plotted over the next 50 candles on the hourly:
You can technically use this as an SMA/EMA type indicator, just keep in mind it may be a bit slower than a traditional EMA and SMA indicator, as it is processing a lot of data and plotting out forecasted data as opposed to an SMA or EMA.
If you wish to use it as an EMA or SMA, you can unselect the "Display Chart" Function to hide the table, and you can also select the "Plot Label" function. This will display the current projection analytics directly on your plotted line so you don't need to reference the table at all:
Tips on use:
I use this on the larger and smaller timeframes. On all timeframes, I will look to targets that display 90% to 100% in the BT results.
Bear in mind, this does not mean we will hit this target 100% of the time; these targets can fail. It just means there is higher confidence of hitting this target than other, less reliable targets.
I will plot these targets out if they fall within the implied range of the timeframe I am looking at and will act on them according to the price action.
This is a great indicator to use in combination with other range-based indicators. If you use the implied range from options to help guide your trading, you can see which targets falling within that implied range are likely to be hit based on the current trend.
You can also assess the strength of the trends at various points in time and have an actionable range with a reliability reading at various points in time.
That is pretty much the bulk of the indicator.
Hopefully you find it helpful and useful.
As always, leave your questions and suggestions below.
Thanks for reading and checking it out!
SuperTrend AI (Clustering) [LuxAlgo]
The SuperTrend AI indicator is a novel take on bridging the gap between the K-means clustering machine learning method & technical indicators. In this case, we apply K-Means clustering to the famous SuperTrend indicator.
🔶 USAGE
Users can interpret the SuperTrend AI trailing stop similarly to the regular SuperTrend indicator. Using higher minimum/maximum factors will return longer-term signals.
The performance metrics displayed on each signal allow for a deeper interpretation of the indicator: higher values could indicate a higher potential for the market to head in the direction of the trend, whereas signals with lower values such as 1 or 0 potentially indicate retracements.
In the image above, we can notice clearer examples of the performance metrics on signals indicating trends; however, these performance metrics cannot predict every signal reliably.
We can see in the image above that the trailing stop and its adaptive moving average can also act as support & resistance. Using higher values of the performance memory setting allows users to obtain a longer-term adaptive moving average of the returned trailing stop.
🔶 DETAILS
🔹 K-Means Clustering
When observing data points within a specific space, we can sometimes observe that some are closer to each other, forming groups, or "Clusters". At first sight, identifying those clusters and finding their associated data points can seem easy but doing so mathematically can be more challenging. This is where cluster analysis comes into play, where we seek to group data points into various clusters such that data points within one cluster are closer to each other. This is a common branch of AI/machine learning.
Various methods exist to find clusters within data, with the one used in this script being K-Means Clustering , a simple iterative unsupervised clustering method that finds a user-set amount of clusters.
A naive form of the K-Means algorithm would perform the following steps in order to find K clusters (a minimal sketch follows the steps):
(1) Determine the amount (K) of clusters to detect.
(2) Initiate our K centroids (cluster centers) with random values.
(3) Loop over the data points, and determine which is the closest centroid from each data point, then associate that data point with the centroid.
(4) Update centroids by taking the average of the data points associated with a specific centroid.
Repeat steps 3 to 4 until convergence, that is until the centroids no longer change.
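Here is a minimal Pine sketch of those steps on a one-dimensional toy dataset with K = 2. The data values are assumed; the actual script initializes centroids from quartiles of real performance data:

```pine
//@version=5
indicator("1-D K-Means sketch")
// toy data: assumed performance values of several indicator instances
data = array.from(0.10, 0.12, 0.11, 0.55, 0.60, 0.58)
var centroids = array.from(0.0, 0.5)               // step 2: K = 2 initial centroids
if barstate.islast
    for _ = 1 to 10                                // repeat steps 3-4 a fixed number of times
        sums   = array.new_float(2, 0.0)
        counts = array.new_int(2, 0)
        for v in data                              // step 3: assign each point to its nearest centroid
            k = math.abs(v - array.get(centroids, 0)) < math.abs(v - array.get(centroids, 1)) ? 0 : 1
            array.set(sums,   k, array.get(sums,   k) + v)
            array.set(counts, k, array.get(counts, k) + 1)
        for j = 0 to 1                             // step 4: move each centroid to its cluster mean
            if array.get(counts, j) > 0
                array.set(centroids, j, array.get(sums, j) / array.get(counts, j))
    label.new(bar_index, 0, "centroids: " + str.tostring(array.get(centroids, 0)) + " / " + str.tostring(array.get(centroids, 1)))
```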
To explain how K-Means works graphically let's take the example of a one-dimensional dataset (which is the dimension used in our script) with two apparent clusters:
This is of course a simple scenario, as K will generally be higher, as will the number of data points. Do note that this method can be very sensitive to the initialization of the centroids, which is why it is generally run multiple times, keeping the run that returns the best centroids.
🔹 Adaptive SuperTrend Factor Using K-Means
The proposed indicator rationale is based on the following hypothesis:
Given multiple instances of an indicator using different settings, the optimal setting choice at time t is given by the best-performing instance with setting s(t) .
Performing the calculation of the indicator using the best setting at time t would return an indicator whose characteristics adapt based on its performance. However, what if the settings of the best-performing and second-best-performing instances of the indicator have a high degree of disparity without a high difference in performance?
Even though this specific case is rare, it is not uncommon to see that performance can be similar for a group of specific settings (this could be observed in a parameter optimization heatmap); filtering desirable settings down to only the best-performing one can then seem too strict. We can as such reformulate our first hypothesis:
Given multiple instances of an indicator using different settings, an optimal setting choice at time t is given by the average of the best-performing instances with settings s(t) .
Finding this group of best-performing instances could be done using the previously described K-Means clustering method, assuming three groups of interest (K = 3) defined as worst performing, average performing, and best performing.
We first obtain an analog of performance P(t, factor) described as:
P(t, factor) = P(t-1, factor) + α * (∆C(t) × S(t-1, factor) - P(t-1, factor))
where 1 > α > 0, which is the performance memory determining the degree to which older inputs affect the current output. C(t) is the closing price, and S(t, factor) is the SuperTrend signal generating function with multiplicative factor factor .
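As a hedged illustration, the recurrence above translates to Pine roughly as follows; the +1/-1 `signal` below is an assumed stand-in for the actual SuperTrend signal-generating function S:

```pine
//@version=5
indicator("Performance memory sketch")
// stand-in for S(t, factor): a +1/-1 trend signal (the real script uses the SuperTrend direction)
signal = math.sign(close - ta.sma(close, 20))
alpha  = input.float(0.1, "Performance memory (alpha)", minval = 0.01, maxval = 1)
var float perf = 0.0
// P(t) = P(t-1) + alpha * (deltaC(t) * S(t-1) - P(t-1))
perf := perf + alpha * (nz(close - close[1]) * nz(signal[1]) - perf)
plot(perf, "Performance")
```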
We run this performance function for multiple factor settings and perform K-Means clustering on the multiple obtained performances to obtain the best-performing cluster. We initiate our centroids using quartiles of the obtained performances for faster centroids convergence.
The average of the factors associated with the best-performing cluster is then used to obtain the final factor setting, which is used to compute the final SuperTrend output.
Do note that we give the liberty for the user to get the final factor from the best, average, or worst cluster for experimental purposes.
🔶 SETTINGS
ATR Length: ATR period used for the calculation of the SuperTrends.
Factor Range: Determine the minimum and maximum factor values for the calculation of the SuperTrends.
Step: Increments of the factor range.
Performance Memory: Determine the degree to which older inputs affect the current output, with higher values returning longer-term performance measurements.
From Cluster: Determine which cluster is used to obtain the final factor.
🔹 Optimization
This group of settings affects the runtime performances of the script.
Maximum Iteration Steps: Maximum number of iterations allowed for finding centroids. Excessively low values can return a better script load time but poor clustering.
Historical Bars Calculation: Calculation window of the script (in bars).
Market Sessions and TPO (+Forecast)
This indicator, "Market Sessions and TPO (+Forecast)", shows various market sessions alongside a TPO profile (presented as the traditional lettering system or as bars) and a price forecast for the duration of the session.
Additionally, numerous statistics for the session are shown.
Features
Session open and close times presented in boxes
Session pre market and post market shown
TPO profile generated for each session (normal market hours only)
A forecast for the remainder of the session is projected forward
Forecast can be augmented by ATR
Naked POCs remain on the chart until violated
Volume delta for the session shown
OI Change for the session shown (Binance sourced)
Total volume for the session shown
Price range for the session shown
The image above shows processes of the indicator.
Volume delta, OI change, total volume and session range are calculated and presented for each session.
Additionally, a TPO profile for the most recent session is shown, and a forecast for the remainder of the active session is shown.
The image above shows an alternative display method for the session forecast and TPO profile!
Additionally, the pre-market and post-market times are denoted by dashed boxes.
The image above exemplifies additional capabilities.
That's all for now; further updates to come and thank you for checking this out!
And a special thank you to @TradingView of course, for making all of this possible!
imlib
Library "imlib"
Description
The library allows you to display images in your scripts utilising table objects. You can change the image size and the screen aspect ratio (the ratio of width to height, which you can change if the image is too wide / tall). The library has an "example()" function which you can use to see how it works. It also has a handy "logo()" function which you can use to quickly display an image by passing the "Image data string", table position, image size and aspect ratio. And of course you can use it in your own custom way by taking the "logo()" function as an example and modifying the code to your needs.
Since tables in Pinescript are limited to 100 by 100 cells, the limit for the image's size is also 100x100 px. All the necessary data to display an image is passed as a string variable, and since Pinescript has a limit of 4096 characters for variables of type string, that string can have a maximum length of 4096 characters, which is enough to display a 64x64px image (and can be enough to display a 100x100 image, depending on the image itself).
Below you can find the definitions of functions for this library.
_decompress(data)
: Decompresses a string of image data
Parameters:
data (string)
Returns: : Array with the decompressed data
load(data)
: Splits the string with image data into components and builds an object
Parameters:
data (string)
Returns: : An object
show(imgdata, table_id, image_size, screen_ratio)
: Displays an image in a table
Parameters:
imgdata (ImgData)
table_id (table)
image_size (float)
screen_ratio (string)
Returns: : nothing
example()
: Use it as an example of how this library works and how to use it in your own scripts
Returns: : nothing
logo(imgdata, position, image_size, screen_ratio)
: Displays logo using image data string
Parameters:
imgdata (string)
position (string)
image_size (float)
screen_ratio (string)
Returns: : nothing
ImgData
Fields:
w (series__integer)
h (series__integer)
s (series__string)
pal (series__string)
data (array__string)
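A minimal usage sketch is shown below. The import path, the image data string and the screen ratio value are placeholders; substitute the actual publisher, version and the data string generated for your image:

```pine
//@version=5
indicator("imlib usage sketch")
// "user/imlib/1" is a placeholder import path - substitute the actual publisher and version
import user/imlib/1 as im
// placeholder image data string - generate the real one for your image
imgdata = "..."
// assumed arguments: position constant, image size, and a "width:height" ratio string
im.logo(imgdata, position.top_right, 1.0, "1:1")
plot(close, display = display.none)  // dummy plot; the image itself is drawn in a table
```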
Volume Delta Compare [Ticks ~ LTF data]
The "Volume Delta Compare " publication shows 2 different techniques to show into-depth details of Volume, using Tick and Lower-Time-Frame (LTF) data.
🔶 USAGE
Check for divergences between price and volume movement
Check details (why and when a ΔV developed)
Or if you want to see a lot of data stacked on each other (:
🔶 CONCEPTS
🔹 Tick vs. LTF data
a Tick is a measure of (upward or downward) movement in price OR volume.
We can use this data by using varip in the code.
Advantage:
• Detail, detail, detail
• Accurate, per tick
Disadvantage:
• Only realtime
• Can reset 'easily' -> loss of data
• Will reset when settings are changed
LTF data, through the request.security_lower_tf() function, measures the OHLCV data per LTF bar
Advantage:
• Access to history when loading a chart
• No 'loss' of data when chart resets
Disadvantage:
• Less detailed
• Less accurate
This script makes it possible to compare the 2 techniques and enables you to show different values.
🔹 Values
There are mainly 3 important values:
• UP volume (uV): volume when price rises
• DOWN volume (dV): volume when price falls
• NEUTRAL volume (nV): volume when price stays the same
From this, additional data is calculated:
• Volume Delta (ΔV): uV minus dV
• Cumulative Delta Volume (cΔV): sum of ΔV
One typical case of nV is at the open: at that moment there isn't a base price to compare with,
so when the first trade doesn't fully fill the first supply (up or down), volume will rise, but price is just 'open', with no movement -> no uV or dV.
• Tick data: every volume change per tick is added to the corresponding variable (uV, dV or nV)
• LTF data: every volume change of each LTF bar is added to the corresponding variable (uV, dV or nV)
-> this can easily give a difference. For example (Tick vs. 1-minute LTF): when most of the ticks caused a rise in price, but in the last few seconds a few ticks push the close below the open, Tick data could show more UP Volume, while LTF data will show a single value of DOWN Volume. A minimal sketch of the LTF approach follows.
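The sketch below assumes close-vs-open of each 1-minute sub-bar as the up/down test, which is a simplification of the script's actual logic (the per-tick varip variant is realtime-only and not shown):

```pine
//@version=5
indicator("Volume delta sketch (LTF data)")
// 1-minute sub-bars of the current chart bar
[o, c, v] = request.security_lower_tf(syminfo.tickerid, "1", [open, close, volume])
float uV = 0.0
float dV = 0.0
float nV = 0.0
for i = 0 to array.size(c) > 0 ? array.size(c) - 1 : na
    if array.get(c, i) > array.get(o, i)       // sub-bar closed up -> UP volume
        uV += array.get(v, i)
    else if array.get(c, i) < array.get(o, i)  // sub-bar closed down -> DOWN volume
        dV += array.get(v, i)
    else                                       // unchanged -> NEUTRAL volume
        nV += array.get(v, i)
plot(uV - dV, "Volume delta", style = plot.style_columns)
```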
🔶 EXAMPLES
🔹 Details
In these examples you can see:
• grey line: Total volume (higher precision)
• UP/DOWN/NEUTRAL Volume
• green columns: uV
• orange columns: dV
• blue pillars: nV
• coloured stepline: reflects ΔV
• close > open and positive ΔV -> green
• close > open but negative ΔV -> fuchsia
• close < open and negative ΔV -> orange
• close < open but positive ΔV -> bright lime green
• Right side -> indication of used data (Tick/LTF data) + last ΔV
• labels (can be disabled)
Above 0 (only with Tick data): data from EVERY tick (ΔV ):
• first the amount of Volume (0 when the amount is very minimal)
• between brackets: price movement
Below 0:
• Σ V: sum of uV, dV and nV, for that bar
• Σ up: sum of uV for that bar
• Σ dn: sum of dV for that bar
• Σ nt: sum of nV for that bar
• Σ P: sum of price movement, for that bar (only at Tick data)
(At the right you'll see a new bar just started)
Here is a detail of the first second at opening:
🔹 Cumulative Volume Delta (CVD)
Difference CVD based on Tick vs. LTF data :
(horizontal lines added for reference)
🔶 FEATURES
🔹 Minimal plotting of na values
Data window and status line only show what is applicable (tick or LTF data) to diminish clutter of data values:
The Tick option has a label above 0 which includes details of every Tick.
If data is added every tick, that label on a 10-minute chart will be filled beyond its limits pretty quickly (string max_length = 4096 limit).
To prevent the script from halting, at a certain limit this label will stop updating and show the message "Too much data".
The label below the 0-line won't reach that limit, so it will keep on updating.
Timeframes closer to 1 second will have less risk of reaching that 4096 limit. Details will continue to show in this case.
🔹 Automatic label colour adaption when changing between dark/light mode values
Label background/text-colour will adapt according to the dark/light-mode by using chart.fg_color / chart.bg_color
🔶 SETTINGS
🔹 Data from: Ticks vs. LTF data
🔹 LTF: Lower Time-Frame for when LTF option is chosen: 1, 5, 10, 15, 30 Seconds or 1 minute
🔹 Also start when bar already has data: only for tick data -> when disabled calculations only start on a new bar.
🔹 CVD, Only show Cumulative Delta Volume: enable to just display CVD
🔹 Colours: colour at the right is for price/volume direction divergences
🔹 Label: choose what you want to display + size labels
🔹 0-line: The label under the 0-line sometimes goes below the chart. This can be adjusted with this setting.
TradingToolsLibrary
Library "TradingToolsLibrary"
Easily create advanced entries, exits, filters and qualifiers to simulate strategies. Supports DCA (Dollar Cost Averaging) Lines, Stop Losses, Take Profits (with trailing or without) & ATR.
method deepCopy(this)
This creates a deep copy instead of a shallow copy of an entry_position. This does NOT deep copy the self_pyramiding_positions array reference, since only the master entry_position needs this to track the rest of its copies for efficiency reasons. This is to prevent a feedback loop.
Namespace types: entry_position
Parameters:
this (entry_position)
Returns: entry_position
method precision_fix(this, precision)
Convert a floating point number to a precise floating point number with digit precision to avoid floating point errors in quantity calculations.
Namespace types: series float, simple float, input float, const float
Parameters:
this (float)
precision (int)
Returns: float
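A plausible sketch of such a precision fix is shown below; the library's actual implementation may differ:

```pine
//@version=5
indicator("precision_fix sketch")
// assumed implementation: round to `precision` decimal digits via a power-of-ten factor
method precision_fix(float this, int precision) =>
    factor = math.pow(10, precision)
    math.round(this * factor) / factor
raw = 0.1 + 0.2                 // binary floating point yields 0.30000000000000004
plot(raw.precision_fix(8))      // -> 0.3
```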
xSellBuyMidInterpolation(_x, _high, _low, _sellRange, _buyRange)
Creates an interpolation for a sell range and buy range, with an emphasis on reaching the _low the closer you get to the middle of the _sell and _buy range.
Parameters:
_x (float) : is the value you want to use to control interpolation between the _high and _low value. This will return the lowest percentage at the midpoint between high and low and the highest percentage at the _high and _low.
_high (float)
_low (float)
_sellRange (float)
_buyRange (float)
Returns: an interpolated float between the _high and _low supplied.
xSellBuyInterpolation(_x, _high, _low, _sellRange, _buyRange)
Creates an interpolation for a sell range and buy range
Parameters:
_x (float) : is the value you want to use to control interpolation between the _high and _low value.
_high (float)
_low (float)
_sellRange (float)
_buyRange (float)
Returns: an interpolated float between the _high and _low supplied.
activate_entries_and_exits(_entries, _exits, _filters, _qualifiers, _equity)
Determines activation for entries or exits. Does not place the actual orders.
Parameters:
_entries (entry_position )
_exits (exit_position )
_filters (filter )
_qualifiers (qualifier )
_equity (equity_management)
Returns: void
create_entries_and_exits(_entries, _exits, _equity)
Creates actual entry and exit orders if activated
Parameters:
_entries (entry_position )
_exits (exit_position )
_equity (equity_management)
Returns: void
filter
Fields:
disabled (series__bool)
filter_for_entries_or_exits (series__string)
filter_for_groups (series__string)
condition (series__bool)
dynamic_condition (series__bool)
use_dynamic_condition (series__bool)
use_override_default_condition (series__bool)
dynamic_condition_operator (series__string)
dynamic_condition_source (series__float)
dynamic_compare_source (series__float)
dynamic_condition_source_prior (series__float)
dynamic_compare_source_prior (series__float)
use_dynamic_compare_source (series__bool)
dynamic_condition_activate_value (series__string)
expire_condition_activate_value (series__string)
expire_condition_source (series__float)
expire_condition_source_prior (series__float)
expire_compare_source (series__float)
expire_compare_source_prior (series__float)
use_expire_compare_source (series__bool)
expire_condition_operator (series__string)
qualifier
Fields:
disabled (series__bool)
qualify_for_entries_or_exits (series__string)
qualify_for_groups (series__string)
disqualify (series__bool)
condition (series__bool)
dynamic_condition (series__bool)
use_dynamic_condition (series__bool)
use_override_default_condition (series__bool)
dynamic_condition_operator (series__string)
dynamic_condition_source (series__float)
dynamic_compare_source (series__float)
dynamic_condition_source_prior (series__float)
dynamic_compare_source_prior (series__float)
use_dynamic_compare_source (series__bool)
dynamic_condition_activate_value (series__string)
expire_after_x_bars (series__integer)
use_expire_after_x_bars (series__bool)
use_expire_condition (series__bool)
use_override_expire_condition (series__bool)
expire_condition_operator (series__string)
expire_condition_source (series__float)
expire_compare_source (series__float)
expire_condition_source_prior (series__float)
expire_compare_source_prior (series__float)
use_expire_compare_source (series__bool)
expire_condition_activate_value (series__string)
active (series__bool)
expire_after_bars_bar_index (series__integer)
expire_after_bars_bar_index_prior (series__integer)
expire_bar_count (series__integer)
expire_bar_changed (series__bool)
entry_position
Fields:
disabled (series__bool)
activate (series__bool)
active (series__bool)
override_occured (series__bool)
passDebug (array__bool)
initial_activation_price (series__float)
dca_done (series__bool)
condition (series__bool)
dynamic_condition (series__bool)
use_dynamic_condition (series__bool)
use_override_default_condition (series__bool)
dynamic_condition_operator (series__string)
dynamic_condition_source (series__float)
dynamic_compare_source (series__float)
dynamic_condition_source_prior (series__float)
dynamic_compare_source_prior (series__float)
use_dynamic_compare_source (series__bool)
dynamic_condition_activate_value (series__string)
use_cash (series__bool)
use_percent_equity (series__bool)
percent_equity_amount (series__float)
cash_amount (series__float)
position_size (series__float)
total_position_size (series__float)
prior_total_position_size (series__float)
equity_remaining (series__float)
prior_equity_remaining (series__float)
initial_equity (series__float)
use_martingale (series__bool)
martingale_win_ratio (series__float)
martingale_lose_ratio (series__float)
martingale_win_limit (series__integer)
martingale_lose_limit (series__integer)
martingale_limit_reset_mode (series__string)
use_dynamic_percent_equity (series__bool)
dynamic_percent_equity_amount (series__float)
initial_dynamic_percent_equity_amount (series__float)
dynamic_percent_equity_source (series__float)
dynamic_percent_equity_min (series__float)
dynamic_percent_equity_max (series__float)
dynamic_percent_equity_source_sell_range (series__float)
dynamic_percent_equity_source_buy_range (series__float)
dynamic_equity_interpolation_method (series__string)
total_bars (series__integer)
bar_index_at_activate (series__integer)
bars_since_active (series__integer)
time_at_activate (series__integer)
time_since_active (series__integer)
bar_index_at_activated (series__integer)
bar_index_at_pyramid_change (series__integer)
name (series__string)
id (series__string)
group (series__string)
pyramiding_limit (series__integer)
self_pyramiding_limit (series__integer)
self_pyramiding_positions (array__|entry_position|#OBJ)
new_pyramid_cancels_dca (series__bool)
num_active_long_positions (series__integer)
num_active_short_positions (series__integer)
num_active_positions (series__integer)
position_remaining (series__float)
prior_position_remaining (series__float)
direction (series__string)
allow_flip_position (series__bool)
flip_occurred (series__bool)
ignore_flip (series__bool)
use_dca (series__bool)
dca_use_limit (series__bool)
dca_num_positions (series__integer)
dca_positions (array__float)
dca_deviation_percentage (series__float)
dca_scale (series__float)
dca_percentages (series__string)
dca_close_cancels (series__bool)
dca_active_positions (series__integer)
use_atr_deviation (series__bool)
dca_atr_length (series__integer)
dca_atr_mult (series__float)
dca_atr_updates_dca_positions (series__bool)
close_price_at_order (series__float)
dca_use_deviation_atr_min (series__bool)
dca_position_quantities (array__float)
use_dca_dynamic_percent_equity (series__bool)
dca_in_use (array__bool)
dca_activated (array__bool)
dca_money_used (array__float)
dca_lines (array__line)
dca_color (series__color)
show_dca_lines (series__bool)
atr_value (series__float)
atr_value_at_activation (series__float)
use_cooldown_bars (series__bool)
cooldown_bars (series__integer)
cooldown_bar_changed (series__bool)
cooldown_bar_index (series__integer)
cooldown_bar_index_prior (series__integer)
cooldown_bar_change_count (series__integer)
expire_condition_activate_value (series__string)
expire_condition_source (series__float)
expire_condition_source_prior (series__float)
expire_compare_source (series__float)
expire_compare_source_prior (series__float)
use_expire_compare_source (series__bool)
expire_condition_operator (series__string)
exit_position
Fields:
disabled (series__bool)
id (series__string)
group (series__string)
exit_for_entries (series__string)
exit_for_groups (series__string)
total_bars (series__integer)
name (series__string)
condition (series__bool)
dynamic_condition (series__bool)
use_dynamic_condition (series__bool)
use_override_default_condition (series__bool)
dynamic_condition_operator (series__string)
dynamic_condition_source (series__float)
dynamic_compare_source (series__float)
dynamic_condition_source_prior (series__float)
dynamic_compare_source_prior (series__float)
use_dynamic_compare_source (series__bool)
dynamic_condition_activate_value (series__string)
activate (series__bool)
active (series__bool)
reset_equity (series__bool)
use_limit (series__bool)
use_alerts (series__bool)
reset_entry_cooldowns (series__bool)
prevent_new_entries_on_partial_close (series__bool)
show_activation_zone (series__bool)
use_average_position (series__bool)
source_value (series__float)
trigger_x_times (series__integer)
amount_of_times_triggered (series__integer)
quantity_percent (series__float)
trade_qty (series__float)
exit_amount (series__float)
entries_exiting_for (array__|entry_position|#OBJ)
atr_value (series__float)
update_atr (series__bool)
use_activate_after_bars (series__bool)
show_activate_after_bars (series__bool)
activate_after_bars (series__integer)
activate_after_bars_bar_changed (series__bool)
activate_after_bars_bar_index (series__integer)
activate_after_bars_bar_index_prior (series__integer)
activate_after_bars_bar_change_count (series__integer)
all_conditions_pass (series__bool)
use_close_if_profit_only (series__bool)
profit_value (series__float)
exit_type (series__string)
exit_modifier (series__string)
update_atr_with_new_pyramid (series__bool)
percentage (series__float)
activation_percentage (series__float)
atr_multiplier (series__float)
use_cancel_if_percent (series__bool)
cancel_if_percent (series__float)
activation_value (series__float)
activation_value_crossed (series__bool)
exit_value (series__float)
hypo_long_exit_value (series__float)
hypo_short_exit_value (series__float)
close_exit_value (series__float)
debug (series__float)
expire_condition_activate_value (series__string)
expire_condition_source (series__float)
expire_condition_source_prior (series__float)
expire_compare_source (series__float)
expire_compare_source_prior (series__float)
use_expire_compare_source (series__bool)
expire_condition_operator (series__string)
equity_management
Fields:
equity (series__float)
prior_equity (series__float)
position_used (series__float)
prior_position_used (series__float)
prevent_future_entries (series__bool)
minimum_order_size (series__float)
decimal_rounding_precision (series__integer)
direction (series__string)
show_order_info_in_comments (series__bool)
show_order_info_in_labels (series__bool)
allow_longs (series__bool)
allow_shorts (series__bool)
override_occured (series__bool)
flip_occured (series__bool)
num_concurrent_wins (series__integer)
num_concurrent_losses (series__integer)
first_entry (|entry_position|#OBJ)
num_win_trades (series__integer)
num_losing_trades (series__integer)
Liquidation Ranges + Volume/OI Dots [Kioseff Trading]
Hello!
Introducing a multi-faceted indicator "Liquidation Ranges + Volume Dots" - this indicator replicates the volume dot tools found on various charting platforms and populates a liquidation range on crypto assets!
Features
Volume/OI dots populated according to user settings
Size of volume/OI dots corresponds to degree of abnormality
Naked level volume dots
Fixed range capabilities for volume/OI dots
Visible time range capabilities for volume/OI dots
Lower timeframe data used to discover iceberg orders (estimated using 1-minute data)
S/R lines drawn at high volume/OI areas
Liquidation ranges for crypto assets (10x - 100x)
Liquidation ranges are calculated using a popular crypto exchange's method
# of violations of liquidation ranges are recorded and presented in table
Pertinent high volume/OI price areas are recorded and presented in table
Personalized coloring for volume/OI dots
Net shorts / net long for the price range recorded
Lines reflecting net short & net long increases/decreases
Configurable volume/OI heatmap (displayed between liquidation ranges)
And some more (:
Liquidation Range
The liquidation range component of the indicator uses a popular crypto exchange's calculation (for liquidation ranges) to populate the chart for where 10x - 100x leverage orders are stopped out.
The image above depicts features corresponding to net shorts and net longs.
The image above shows features corresponding to liquidation zones for the underlying coin.
The image above shows the option to display volume/oi delta at the time the corresponding grid was traded at.
The image above shows an instance of using the "fixed range" feature for the script.
*The average price of the range is calculated to project liquidation zones.
*Heatmap is calculated using OI (or volume) delta.
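Since the exchange's exact formula isn't disclosed here, the sketch below uses the textbook approximation (ignoring fees and maintenance margin) of where a fully leveraged position is wiped out; the averaging source is an assumed stand-in for the range's average price:

```pine
//@version=5
indicator("Liquidation level sketch", overlay = true)
// textbook approximation - not the exchange's exact formula
avgPrice = ta.sma(close, 50)             // stand-in for the range's average price
leverage = input.float(10, "Leverage", minval = 1)
longLiq  = avgPrice * (1 - 1 / leverage) // longs fully liquidated near here
shortLiq = avgPrice * (1 + 1 / leverage) // shorts fully liquidated near here
plot(longLiq,  "Long liquidation",  color = color.red)
plot(shortLiq, "Short liquidation", color = color.green)
```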
Huge thank you to Pine Wizard @DonovanWall for his range filter code!
Price ranges are automatically detected using his calculation (:
Volume / OI Dots
Similar to other charting platforms, the volume/OI dots component of the indicator distinguishes "abnormal" changes in volume/OI; the detected price area is subsequently identified on the chart.
The detection method uses percent rank and calculates on the last bar of the chart. The "agelessness" of detection is contingent on user settings.
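A minimal sketch of percent-rank-based abnormality detection is shown below, assuming a simple window and threshold (not the script's actual parameters):

```pine
//@version=5
indicator("Abnormal volume sketch", overlay = true)
length    = input.int(100, "Rank window")
threshold = input.float(95, "Percent rank threshold")
// where does this bar's volume rank versus the last `length` bars?
pr = ta.percentrank(volume, length)
// flag bars whose volume ranks above the threshold
plotshape(pr > threshold, style = shape.circle, location = location.abovebar, color = color.new(color.blue, 0), size = size.tiny)
```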
The image above shows volume dots in action; the size of each volume dot corresponds to the amount of volume at the price area.
Smaller dots = lower volume
Larger dots = higher volume
The image above exemplifies the highest aggression setting for volume/OI dot detection.
The table oriented top-right shows the highest volume areas (discovered on the 1-minute chart) for the calculated period.
The open interest change and corresponding price level are also shown. Results are listed in descending order but can also be listed in order of occurrence (most relevant).
Additionally, you can use the visible time range feature to detect volume dots.
The image above shows and explains how the visible range feature works. You select how many levels you want to detect, and the script will detect the selected number of levels.
For instance, if I select to show 20 levels, the script will find the 20 highest volume/OI change price areas and distinguish them.
The image above shows a narrower price range.
The image above shows the same price range; however, the script is detecting the highest OI change price areas instead of volume.
* You can also set a fixed range with this feature
* Naked levels can be used
Additionally, you can select for the script to show only the highest volume/ OI change price area for each bar. When active, the script will successively identify the highest volume / OI change price area for the most recent bars.
Naked Levels
The image above shows and explains how naked levels can be detected when using the script.
And that's pretty much it!
Of course, there're a few more features you can check out when you use the script that haven't been explained here (:
Thank you again to @DonovanWall
Thank you to @Trendoscope for his binary insertion sort library (:
Thank you to @PineCoders for their time library
Thank you for checking this out!
Modern Portfolio Management Indicator
After weeks of grueling over this indicator, I am excited to be releasing it!
Intro:
This is not a sexy, technical or math-based indicator that will give you buy and sell signals or anything fancy, but it is an indicator that I created in hopes of bridging a gap I have noticed. That gap is the lack of indicators and technical resources for those who also like to plan their investments. This indicator is tailored both to established investors and to those who are looking to get into investing but don't really know where to start.
The premise of this indicator is based on Modern Portfolio Theory (MPT). Before we get into the indicator itself, I think its important to provide a quick synopsis of MPT.
About MPT:
Modern Portfolio Theory (MPT) is an investment framework that was developed by Harry Markowitz in the 1950s. It is based on the idea that an investor can optimize their investment portfolio by considering the trade-off between risk and return. MPT emphasizes diversification and holds that the risk of an individual asset should be assessed in the context of its contribution to the overall portfolio's risk. The theory suggests that by diversifying investments across different asset classes with varying levels of risk, an investor can achieve a more efficient portfolio that maximizes returns for a given level of risk or minimizes risk for a desired level of return. MPT also introduced the concept of the efficient frontier, which represents the set of portfolios that offer the highest expected return for a given level of risk. MPT has been widely adopted and used by investors, financial advisors, and portfolio managers to construct and manage portfolios.
So how does this indicator help with MPT?
The thinking and theory that went behind this indicator was this: I wanted an indicator, or really just a "way", to test and back-test ticker performance over time and under various circumstances, and to help manage risk.
Over the last 3 years we have seen a massive bull market, followed by a pretty huge bear market, followed by a very unexpected bull market. We have been and continue to be plagued with economic and political uncertainty that seems to constantly be looming over everyone with each waking day. Some people have liquidated their retirement investments, while others are fomoing in to catch this current bull run. But which tickers are sound and how tickers and funds have compared amongst each other remains somewhat difficult to ascertain, absent manually reviewing and calculating each ticker individually.
That is where this indicator comes in. This indicator permits the user to define up to 5 equities that they are potentially interested in investing in, or are already invested in. The user can then select a specific period in time, say from the beginning of 2022 till now. The user can then define how much they want to invest in each company by number of shares, so if they want to buy 1 share a week, or 2 shares a month, they can input these variables into the indicator to draw conclusions. As many brokers are also now permitting fractional share trading, this ability is also integrated into the indicator. So for shares, you can put in, say, 0.25 shares of SPY and the indicator will accept this and account for this fractional share.
The indicator will then show you a portfolio summary of what your earnings and returns would be for the defined period. It will provide a percent return as well as the projected P&L based on your desired investment amount and frequency.
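For intuition, here is a minimal Pine sketch of the underlying DCA arithmetic, assuming one whole-or-fractional share purchase at the close of each new week (a simplification of the indicator's logic):

```pine
//@version=5
indicator("DCA P&L sketch")
sharesPerBuy = input.float(1.0, "Shares per weekly buy", minval = 0.01)  // fractional shares allowed
var float totalShares = 0.0
var float totalCost   = 0.0
if timeframe.change("W")                 // contribute at the start of each new week
    totalShares += sharesPerBuy
    totalCost   += sharesPerBuy * close
marketValue = totalShares * close
pnl         = marketValue - totalCost
pctReturn   = totalCost > 0 ? pnl / totalCost * 100 : na
plot(pctReturn, "DCA % return")
```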
But it goes beyond just that, you can also have the indicator display a simple forecasting projection of the portfolio. It will show the projected P&L and % Return over various periods in time on each of the ticker (see image below):
The indicator will also break down your portfolio allocation, it will show where the majority of your holdings are and where the majority of your P&L in coming from (best performers will show a green fill and worst will show a red fill, see image below):
This colour coding also extends to the portfolio breakdown itself.
Dollar cost averaging (DCA) is incorporated into the indicator itself by assuming ongoing contributions. If you want to stop contributions at a certain point, simply select the end time at which you would stop contributing.
The indicator also provides some basic fundamental information about the company tickers (if applicable). Simply select the "Fundamental" chart and it will display a breakdown of the fundamentals, including dividends paid, market cap and earnings yield:
The indicator also provides a correlation assessment of each holding against each other holding. This emphasizes the profound role of diversification on portfolios. The less correlation you have in your portfolio among your holdings, the better diversified you are. As well, if you have holdings that are perfectly inverse other holdings, you have a pseudo hedge against the downturn of one of your holdings. This is even more helpful if the inverse is a company with solid fundamentals.
In the below example you will see NASDAQ:IRDM in the portfolio. You will be able to see that NASDAQ:IRDM has a slight inverse relationship to SPY:
Yet IRDM has solid fundamentals and is performing well. This makes IRDM a solid addition to your portfolio, as it can potentially hedge against a downturn for SPY and is less risky than simply holding an inverse leveraged share on SPY, which is more likely to cost you money than make you money.
Concluding remarks:
There are many fun and interesting things you can do with this indicator and I encourage you to try it out and have fun with it! The overall objective with the indicator is to help you plan for your portfolio and not necessarily to manage your portfolio. If you have a few stocks you are looking at and contemplating investing in, this will help you run some theoretical scenarios with this stock based on historical performance and also help give you a feel of how it will perform in the future based on past behaviour.
It is important to remember that past behaviour does not indicate future behaviour, but the indicator provides you with tools to get a feel for how a stock has performed under various circumstances and get a general feel of the fundamentals of the company you could potentially be investing in.
Please note, this indicator is not meant to replace full, fundamental analyses of individual companies. It is simply meant to give you a "gist" of how companies are fundamentally and how they have performed historically.
I hope you enjoy it!
Safe trades everyone!
Flag Finder
The Flag Finder Indicator is a technical analysis tool to identify bull and bear flags.
What are flags
Flags are continuation patterns that occur within the general trend of the security. A bull flag represents a temporary pause or consolidation before price resumes its upward movement, while a bear flag occurs before price continues its downward movement.
Both flag patterns consist of two components:
The Pole
The Flag
The pole is the initial strong upward surge or decline that precedes the flag. The pole is usually a fast move accompanied by heavy volume signaling significant buying or selling pressure.
The flag is then formed as price consolidates after the initial surge or decline from the pole. For a bull flag, price will drift slightly downward to sideways; for a bear flag, it will drift upward to sideways. The best flags often see volume dry up during this phase of the pattern.
Indicator Settings
Both components are fully customizable in the indicator, so the user can adjust for any time frame or volatility. Select the minimum and maximum accepted limits for the % gain/loss required for the pole, the maximum acceptable flag depth or rally, and the minimum and maximum number of bars for each component.
Colors and what components are visible at any time are also user controlled.
Trading flags
Traders typically use flags to enter on breakouts. A breakout occurs when price moves above the left side high of a bull flag or below the left side low of a bear flag.
Alerts
The Flag Finder allows for four different types of alerts
New Bull Flag
New Bear Flag
Bull Flag Breakout
Bear Flag Breakout
Pine Script
On top of identifying bull and bear flags, throughout the source code I left notes on nearly every line to help anyone who is interested in Pine Script see my thought process and explain what each line of code does. This code isn't too complex, but it offers a look into many different concepts one might use when writing Pine Script (a toy sketch follows the list), such as:
input groups
declaring and reassigning variables
for loops
plotshapes & lines
alerts
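As a toy illustration of a few of these concepts (input groups, a for loop, variable reassignment and plotshape), here is a minimal sketch; the pole test is deliberately simplified and is not the indicator's actual detection logic:

```pine
//@version=5
indicator("Pine concepts sketch", overlay = true)
// input groups keep related settings together in the menu
grp  = "Pole"
gain = input.float(5.0, "Min pole gain %", group = grp)
bars = input.int(8, "Max pole bars", group = grp)
// a for loop scanning recent bars for a qualifying pole
bool pole = false
for i = 1 to bars
    if (close - close[i]) / close[i] * 100 >= gain
        pole := true
        break
// plotshape marks the bar where the condition is met
plotshape(pole, style = shape.flag, location = location.belowbar, color = color.green)
```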
Volume Forks [Trendoscope]
🎲 Volume Forks - Advanced Price Analysis with Recursive Auto-Pitchfork and Angled Volume Profile
The Volume Forks Indicator is a comprehensive research tool that combines two innovative techniques, Recursive Auto-Pitchfork and Angled Volume Profile. This indicator provides traders with valuable insights into price dynamics by integrating accurate pitchfork drawing and volume analysis over angled levels. The indicator does the following things:
Detects Pitchfork formations automatically on the chart over Recursive Zigzag
Instead of drawing forks based on fib levels, the volume distribution over the ABC of the pitchfork is calculated and drawn in the direction of the handle.
🎲 Brief about Pitchfork
A pitchfork is drawn when price forms an ABC pattern. The pitchfork draws a series of parallel lines in the direction of the trend which can be used for support and resistance.
There are many methods of drawing a pitchfork. In all cases, a line joining B and C forms the base of the pitchfork, and fork lines are drawn from different points of the base. All the fork lines will be parallel, but the handle of the base defines the direction of the fork lines. Classification of pitchforks is mainly based on the starting and ending points of the handle (a minimal construction sketch follows the type list below).
🎲 Regular Types
Here, end of the handle is always fixed and it will be the mid point of B and C.
🎯 Andrews Pitchfork
Handle starts from A and joins the base at mid of B and C.
Forks are drawn based on fib ratios from the handle
🎯 Schiff Pitchfork
Handle starts from Bar of A and price of middle of AB and joins the base at mid of B and C
Forks are drawn based on fib ratios from the handle
🎯 Modified Schiff Pitchfork
Handle starts from mid of A and B and joins the base at mid of B and C
Forks are drawn based on fib ratios from the handle
🎲 Inside Types
Here, C will act as end of the handle which joins the Base BC .
🎯 Andrews Pitchfork (Inside)
Handle starts from A and joins the base at C
Forks are drawn based on fib ratios from the handle
🎯 Schiff Pitchfork (Inside)
Handle starts from Bar of A and price of (A+B)/2 and joins the base at C
Forks are drawn based on fib ratios from the handle
🎯 Modified Schiff Pitchfork (Inside)
Handle starts from mid of A and B and joins the base at C
Forks are drawn based on fib ratios from the handle
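Below is a minimal construction sketch of the classic Andrews case (handle from A to the midpoint of B and C, forks through B and C parallel to the handle). The pivot offsets are hardcoded purely for illustration; the actual script detects A, B and C via recursive zigzag:

```pine
//@version=5
indicator("Andrews median line sketch", overlay = true)
if barstate.islast
    // A, B, C picked at fixed bar offsets purely for illustration
    int   aBar   = bar_index - 60
    float aPrice = low[60]
    int   bBar   = bar_index - 40
    float bPrice = high[40]
    int   cBar   = bar_index - 20
    float cPrice = low[20]
    // handle: from A to the midpoint of B and C
    int   midBar   = math.floor((bBar + cBar) / 2)
    float midPrice = (bPrice + cPrice) / 2
    line.new(aBar, aPrice, midBar, midPrice, extend = extend.right, color = color.blue)
    // upper/lower forks pass through B and C, parallel to the handle
    float slope = (midPrice - aPrice) / (midBar - aBar)
    line.new(bBar, bPrice, bBar + 10, bPrice + slope * 10, extend = extend.right, color = color.gray)
    line.new(cBar, cPrice, cBar + 10, cPrice + slope * 10, extend = extend.right, color = color.gray)
```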
🎲 Brief about Angled Volume Profile
The Angled Volume Profile technique expands on the concept of volume profile by measuring volume distribution levels over angled levels rather than just horizontal levels. By selecting a starting point and angle interactively, traders can assess volume distribution within specific price trends. This feature is particularly useful for analysing volume dynamics in trending markets.
🎲 Settings
Indicator settings include few things which determine the scanning of pitchforks and few which determines drawing of volume profile lines.
Please note that, due to pine limitations of 500 lines, if there are too many formations on the chart, volume profile may not appear correctly. If that happens, please reduce the number of volume forks per formation.
Developing Market Profile / TPO [Honestcowboy]
The Developing Market Profile Indicator aims to broaden the horizon of Market Profile / TPO research and trading. While a standard Market Profile aims to show where PRICE is in relation to TIME in a previous session (usually a day), the Developing Market Profile changes bar by bar and displays PRICE in relation to TIME for a user-specified number of past bars.
What is a market profile?
"Market Profile is an intra-day charting technique (price vertical, time/activity horizontal) devised by J. Peter Steidlmayer. Steidlmayer was seeking a way to determine and to evaluate market value as it developed in the day time frame. The concept was to display price on a vertical axis against time on the horizontal, and the ensuing graphic generally is a bell shape--fatter at the middle prices, with activity trailing off and volume diminished at the extreme higher and lower prices."
For education on market profiles I recommend you search the net and study some profitable traders who use it.
Key Differences
Does not have a value area but distinguishes each column in relation to the biggest column in percentage terms.
Updates bar by bar
Does not take sessions into account
Shows historical values for each bar
While there is an entire education system built around Market Profiles, they usually focus on a daily profile and, in some cases, how the value area develops during the day (there are indicators showing the developing value area).
The idea of trading based on a developing value area is what inspired me to build the Developing Market Profile.
🟦 CALCULATION
Think of this Developing Market Profile the same way as you would think of a moving average. On each bar it will lookback 200 bars (or as user specified) and calculate a Market Profile from those bars (range).
🔹Market Profile gets calculated using these steps (a minimal sketch follows the list):
Get the highest high and lowest low of the price range.
Separate that range into user specified amount of price zones (all spaced evenly)
Loop through the ranges bars and on each bar check in which price zones price was, then add +1 to the zones price was in (we do this using the OccurenceArray)
After it looped through all bars in the range it will draw columns for each price zone (using boxes) and make them as wide as the OccurenceArray dictates in number of bars
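Here is a minimal sketch of that counting loop; the real script draws the counts as boxes of varying width and handles many edge cases this sketch ignores:

```pine
//@version=5
indicator("Developing profile sketch")
lookback = input.int(200, "Lookback bars")
rows     = input.int(20,  "Price zones")
hi = ta.highest(high, lookback)
lo = ta.lowest(low, lookback)
if barstate.islast and hi > lo
    occurrence = array.new_int(rows, 0)         // the OccurenceArray
    zoneSize   = (hi - lo) / rows
    // walk the range's bars; +1 to every zone the bar's high-low span touched
    for i = 0 to lookback - 1
        topZone = math.min(rows - 1, math.floor((high[i] - lo) / zoneSize))
        botZone = math.max(0,        math.floor((low[i]  - lo) / zoneSize))
        for z = botZone to topZone
            array.set(occurrence, z, array.get(occurrence, z) + 1)
    // each count maps to a column width in bars (drawn with box.new in the real script)
    label.new(bar_index, 0, "zone counts: " + str.tostring(occurrence))
```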
🔹Coloring each column:
The script will find the biggest column in the Profile and use that as a reference for all other columns. It will then decide for each column individually how big it is in % compared to the biggest column. It will use that percentage to decide which color to give it, top 20% will be red, top 40% purple, top 60% blue, top 80% green and all the rest yellow. The user is able to adjust these numbers for further customisation.
The historical display of the profiles uses plotchar(); it not only uses the color of the column at that time, but the % rating also decides transparency for further detail when analysing how the profiles developed over time. Each of those historical profiles is calculated using its own 200 past bars. This makes the script very heavy, which is why it includes optimisation settings; more info below.
🟦 USAGE
My general idea of the markets is that they are ever changing, and that in studying that changing behaviour a good trader is able to distinguish new behaviour from old behaviour and adapt his approach before losing traders ("weak hands") do.
A Market Profile can visually show a trader what kind of market environment we currently are in. In training this visual feedback helps traders remember past market environments and how the market behaved during these times.
Use the history shown using plotchars in colors to get an idea of how the Market Profile looked at each bar of the chart.
This history will help in studying how price moves at different stages of the Market Profile development.
I'm in no way an expert in trading Market Profiles so take this information with a grain of salt. Below an idea of how I would trade using this indicator:
🟦 SETTINGS
🔹MARKET PROFILING
Lookback: The amount of bars the Market Profile will look in the past to calculate where price has been the most in that range
Resolution: This is the amount of columns the Market Profile will have. These columns are calculated using the highest and lowest point price has been for the lookback period
Resolution is limited to a maximum of 32 because of Pine Script's plotting limit (64). Each plotchar(), because it uses variable colors, takes up 2 of these slots.
🔹VISUAL SETTINGS
Profile Distance From Chart: The amount of bars the market profile will be offset from the current bar
Border width (MP): The line thickness of the Market Profile column borders
Character: This is the character the history will use to show past profiles, default is a square.
Color theme: You can pick 5 colors from biggest column of the Profile to smallest column of the profile.
Numbers: these are for % to decide column color. So on default top 20% will be red, top 40% purple... Always use these in descending order
Show Market Profile: This setting will enable/disable the current Market Profile (columns on right side of current bar)
Show Profile History: This setting will enable/disable the Profile History which are the colored characters you see on each bar
🔹OPTIMISATION AND DEBUGGING
Calculate from here: The Market Profile will only start to calculate bar by bar from this point. Setting is needed to optimise loading time and quite frankly without it the script would probably exceed tradingview loading time limits.
Min Size: This setting is there to avoid visual bugs in the script. When scaling the chart, there can be issues where the Market Profile extends all the way to 0. To avoid this, use a minimum size bigger than the bugged bottom box.
Cycles Analysis
I strongly believe in cycles, so I wanted to create something that would give a visual representation of bull/bear markets and give a prediction based on the previous data. It's up to you how to decide what is a bull/bear cycle. There is no single rule for all assets because a 20% drop in SP500 starts a bear market in traditional markets, while a 35% drop for Bitcoin is a Tuesday. You have two options on how to decide when markets turn: either by a % change (traditional definition) or if there is no new high/low after X days. A softer version to show periods of no new highs/lows is to use the Stagnation option. Stagnation periods have the same logic as the cycle change by X days: if there is no new high/low then we treat this period as a stagnation. The difference is that stagnation periods do not change cycle directions and do not participate in calculations.
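As a hedged illustration of the "no new high/low after X days" rule, here is a minimal sketch (bar-based rather than day-based, and without the stagnation logic):

```pine
//@version=5
indicator("Cycle turn sketch", overlay = true)
maxBars = input.int(90, "Bars without a new high/low to flip the cycle")
var bool  bull       = true
var float extreme    = high
var int   extremeBar = bar_index
// extend the current cycle on a new extreme
if bull and high > extreme or not bull and low < extreme
    extreme    := bull ? high : low
    extremeBar := bar_index
// no new extreme for too long -> flip the cycle
if bar_index - extremeBar > maxBars
    bull       := not bull
    extreme    := bull ? high : low
    extremeBar := bar_index
bgcolor(bull ? color.new(color.green, 90) : color.new(color.red, 90))
```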
The script also draws a possible "predictions" zone where the current cycle might end up. There is no magic here, it just takes previous cycles' size to draw the possible boundaries. If you decide to use percentiles then the box area will be taken from the percentiles calculations, otherwise it will come from the full data. "x" in the predictions zone represents a target mean (average) value, "o" represents a target median value.
A few things to keep in mind:
- this script is not supposed to be used in trading. It was created for analysis. It repaints. And when I say "it repaints" - it might repaint the last 6 months of data if a new low comes and we are in a stagnation period (aka not financial advice).
- it doesn't work with replays as it does calculations only once on the last candle.
- you need at least 3 periods to be able to calculate percentiles. And after this it will remove at least 1 period on each side. Which means that the 90th percentile will not be a real 90th percentile until you have enough periods for it to be (20 in this specific case).
- it assumes that a year = 360 days, and a month = 30 days. So the duration presentation might not be exact, until you move to the day level.
- I had macro analysis in mind when I created the script, but nothing stops you from using it in a 1m time frame for BTC. Just change the time duration presentation.
- the last period is not finished, so it doesn't participate in calculations.
Liquidity Sentiment Profile [LuxAlgo]
The Liquidity Sentiment Profile is an advanced charting tool that measures liquidity and sentiment by combining PRICE and VOLUME data over specified anchored periods, and highlights within a sequence of profiles the distribution of the liquidity and the market sentiment at specific price levels.
The Liquidity Sentiment Profile allows traders to reveal significant price levels, dominant market sentiment, support and resistance levels, supply and demand zones, liquidity availability levels, liquidity gaps, consolidation zones, and more based on price and volume data.
Liquidity refers to the availability of orders at specific price levels in the market, allowing transactions to occur smoothly.
🔶 USAGE
A Liquidity Sentiment Profile is a combination of a liquidity and a sentiment profile, where the right part of the profile displays the distribution of the traded activity at different price levels and the left part displays the market sentiment at those price levels.
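One plausible construction of such a two-sided profile, sketched in Python purely for illustration (this is not LuxAlgo's actual method; the candle-direction sentiment split and the bin layout are assumptions):

```python
import numpy as np

def liquidity_sentiment_profile(opens, closes, volumes, rows: int = 10):
    """Bin traded volume by price for the liquidity side (right), and split it
    by candle direction for the sentiment side (left)."""
    closes = np.asarray(closes, dtype=float)
    edges = np.linspace(closes.min(), closes.max(), rows + 1)
    liquidity = np.zeros(rows)  # total traded activity per price level
    sentiment = np.zeros(rows)  # bullish minus bearish volume per level
    for o, c, v in zip(opens, closes, volumes):
        row = min(int(np.searchsorted(edges, c, side="right")) - 1, rows - 1)
        liquidity[row] += v
        sentiment[row] += v if c >= o else -v  # up-candle volume counts as bullish
    return edges, liquidity, sentiment

# Two toy candles purely for demonstration: one bullish, one bearish
edges, liq, sent = liquidity_sentiment_profile(
    opens=[1.0, 2.0], closes=[2.0, 1.5], volumes=[100, 80], rows=4)
```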
The Liquidity Sentiment Profiles are visualized with different colors, where each color has a different meaning.
The Liquidity Sentiment Profiles aim to present Value Areas based on the significance of price levels, thus allowing users to identify value areas that can be formed more than once within the range of a single profile.
Level of Significance Line - displays the changes in the price levels with the highest traded activity (developing POC)
🔶 SETTINGS
The script takes user-defined parameters into account and plots the profiles; detailed usage for each input parameter is provided in the related input's tooltip in the indicator settings.
🔹 Liquidity Sentiment Profiles
Anchor Period: The indicator resolution is set by the Anchor Period input; the default option is AUTO.
🔹 Liquidity Profile Settings
Liquidity Profile: Toggles the visibility of the Liquidity Profiles
High Traded Nodes: Threshold and Color option for High Traded Nodes
Average Traded Nodes: Color option for Average Traded Nodes
Low Traded Nodes: Threshold and Color option for Low Traded Nodes
🔹 Sentiment Profile Settings
Sentiment Profile: Toggles the visibility of the Sentiment Profiles
Bullish Nodes: Color option for Bullish Nodes
Bearish Nodes: Color option for Bearish Nodes
🔹 Other Settings
Level of Significance: Toggles the visibility of the Level of Significance Line
Profile Price Levels: Toggles the visibility of the Profile Price Levels
Number of Rows: Specifies how many rows each profile histogram will have. Caution: setting this to high values will quickly hit the Pine Script™ drawing-objects limit, and fewer historical profiles will be displayed.
Profile Width %: Alters the width of the rows in the histogram, relative to the profile length
Profile Range Background Fill: Toggles the visibility of the Profiles Range
🔶 LIMITATIONS
The number of drawing objects that can be used is limited; as such, using a high number of rows can display fewer historical profiles and, occasionally, incomplete profiles.
🔶 RELATED SCRIPTS
🔹 Buyside-Sellside-Liquidity
🔹 ICT-Concepts
🔹 Swing-Volume-Profiles
Trend Correlation Heatmap
Hello everyone!
I am excited to release my trend correlation heatmap, or trend heatmap for short.
Per usual, I think it's important to explain the theory before we get into the use of the indicator, so let's get into the theory!
The theory:
So what is a correlation?
Correlation is the relationship one variable has to another. Correlations are the basis of everything I do as a quantitative trader: the correlation between a variable and itself (i.e. autocorrelation), the correlation between different variables (i.e. VIX and SPY, SPY High and SPY Low, DXY and ES1! close, etc.) and the correlation between price and time (time series correlation).
This may sound very familiar to you, especially if you are a user, observer or follower of my ideas and/or indicators. Ninety-five percent of my indicators are a function of one of those three things, whether time series based (i.e. my time series indicator), autocorrelation based (my autoregressive cloud indicator or my autocorrelation oscillator) or regressive in nature (i.e. my SPY Volume weighted close, or my expected move, which uses averages in lieu of regressive approaches but is foundational in regression principles, or my VIX oscillator, which relies on the premise of correlations between tickers). So correlation is extremely important to me, and while it's true I am more of a regression trader than anything, I would argue I am really a correlation trader, because correlations are the backbone of how I develop math models of stocks.
What I am trying to stress here is the importance of correlations. They really truly are foundational to any type of quantitative analysis for stocks. And as such, understanding the current relationship a stock has to time is pivotal for any meaningful analysis to be conducted.
So what is correlation to time and what does it tell us?
Correlation to time, otherwise known and commonly referred to as "Time Series", is the relationship a ticker's price has to the passing of time. It is expressed as the traditional Pearson correlation coefficient, or R value, and can take any value from -1 (a strong negative relationship, i.e. a strong downtrend) to +1 (a strong positive relationship, i.e. a strong uptrend). The closer the value is to either extreme, the stronger the downtrend or uptrend.
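In code, the core measurement is compact. Here is a minimal sketch (Python/NumPy rather than Pine Script, and not the indicator's actual source) of the correlation between closing price and the bar index over a lookback window:

```python
import numpy as np

def correlation_to_time(closes: np.ndarray, lookback: int) -> float:
    """Pearson R between price and time: near +1 = strong uptrend,
    near -1 = strong downtrend over the window."""
    window = closes[-lookback:]
    time_axis = np.arange(len(window))
    return float(np.corrcoef(time_axis, window)[0, 1])

# Synthetic upward-drifting series purely for demonstration
closes = np.cumsum(np.random.default_rng(0).normal(0.2, 1.0, 500)) + 100.0
print(correlation_to_time(closes, 50))   # a value like 0.96 reads as a strong uptrend
print(correlation_to_time(closes, 500))
```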
As such, correlation to time tells us two very important things. These are:
a) The direction of the stock; and
b) The strength of the trend.
Let's take a look at an example:
Above we have a chart of QQQ. We can see a trendline that seems to fit well. The questions we ask as traders are:
1. What is the likelihood QQQ breaks down from this trendline?
2. What is the likelihood QQQ continues up?
3. What is the likelihood QQQ does a false breakdown?
There are numerous mathematical approaches we can take to answer these questions. For example, questions 1 and 2 can be answered by a Cumulative Distribution Density analysis (CDDA) or even a linear or log-linear regression analysis, and question 3 can be answered, more or less, with a linear regression analysis and standard-error ascertainment, or even a general comparison using a data-science approach (such as cosine similarity or Manhattan distance).
But, the reality is, all 3 of these questions can be visualized, at least in some way, by simply looking at the correlation to time. Let's look at this chart again, this time with the correlation heatmap applied:
If we look at the indicator we can see some pivotal things. These are:
1. We have 4 very strong uptrends that span both higher AND lower timeframes: a 0.96 correlation on the 5 minute over a 50 candle lookback, a strong uptrend at the 300 candle lookback on the 1 minute, a strong uptrend at the 100 day lookback on the daily timeframe, and a strong uptrend on the 5 minute at the 500 candle lookback.
2. By comparison, we have 3 downtrends, all with weaker correlations than the 4 uptrends. All of the downtrends sit above -0.8 (we would want values below -0.8 for a very strong downtrend), while all of the uptrends are greater than +0.80.
3. We can also see that the uptrends are not confined to the smaller timeframes. We have multiple uptrends on multiple timeframes, both short term (50 to 100 candles) and long term (up to 500 candles).
4. The overall trend is strengthening to the upside, manifested by a positive Max change and a positive Min change (discussed in more depth later).
With this, we can see that QQQ is actually very strong and will likely see at least some continued upside. If we let this play out:
We continued up, had one test and then bounced.
Now, I want to be clear: this indicator is not a panacea for all trading. The 3 questions posed above are best answered, at least quantitatively, not only by correlation but also by the aforementioned methods (CDDA, etc.); still, correlation will help you get a feel for the strength or weakness present in a stock.
What are some tangible applications of the indicator?
For me, this indicator is used in many ways. Let me outline some ways I generally apply this indicator in my day and swing trading:
1. Gauging the strength of the stock: The indicator tells you the most prevalent behavior of the stock. Are there more downtrends than uptrends present? Are the downtrends on the larger timeframes versus uptrends on the shorter, indicating a possible bullish reversal, or vice versa? Are the trends strengthening or weakening? All of these things can be visualized with the indicator.
2. Setting parameters for other indicators: If you trade EMAs or SMAs, you may have a "one size fits all" approach. However, it's actually better to adjust your EMA or SMA length to the actual trend itself. Take a look at this:
This is QQQ on the 1 hour with the 200 EMA and 200-length standard deviation bands added. If we look at the heatmap, we can see that 200 does indeed have a fairly strong uptrend correlation of 0.70. But the strongest hourly uptrend is actually at 400 candles, with a correlation of 0.91. So what happens if we change the EMA length and standard deviation to 400? This:
The exact areas are circled and colour coded. You can see the 400 offers a better reference point for supports and resistances, as well as a better overall trend fit. This is why I never advocate getting married to a specific EMA. If you are a lover of the 200 EMA, or the 21 or the 51, know that these are not always the best; it depends on the trend and the situation.
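The tuning idea above can be sketched as a small scan: compute the correlation to time for a set of candidate lookbacks and size the EMA to the strongest one. The candidate list and helper names here are illustrative assumptions, not the indicator's source:

```python
import numpy as np

def ema(closes: np.ndarray, length: int) -> np.ndarray:
    """Standard exponential moving average with alpha = 2 / (length + 1)."""
    alpha = 2.0 / (length + 1)
    out = np.empty_like(closes)
    out[0] = closes[0]
    for i in range(1, len(closes)):
        out[i] = alpha * closes[i] + (1 - alpha) * out[i - 1]
    return out

def best_trend_length(closes: np.ndarray,
                      candidates=(50, 100, 200, 300, 400, 500)) -> int:
    """Pick the lookback whose absolute correlation to time is strongest."""
    def strength(lb: int) -> float:
        window = closes[-lb:]
        return abs(float(np.corrcoef(np.arange(len(window)), window)[0, 1]))
    return max(candidates, key=strength)

closes = np.cumsum(np.random.default_rng(1).normal(0.1, 1.0, 600)) + 100.0
length = best_trend_length(closes)  # e.g. 400 instead of a default 200
trend_ema = ema(closes, length)     # EMA fitted to the dominant trend
```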
Components of the indicator:
Ah okay, now for the boring stuff. Let's go over the functionality of the indicator. I tried to keep it simple, so it is pretty straightforward. If we open the menu, here are our options:
We have the ability to toggle whichever timeframes we want. We also have the ability to toggle on or off the legend that displays the colour codes and the Max and Min highest change.
Max and Min highest change: The Max and Min highest change simply display the change in correlation over the previous 14 candles. An increasing Max change means the strongest uptrend is strengthening. If we see an increasing Max change and an increasing Min change (the Min correlation moving up), the stock is bullish. Why? Because the Min (ideally a big negative number) is moving up, closer to the positives; therefore, the downtrend is weakening.
If we see both the Max and Min change declining (red), the uptrend is weakening and the downtrend is strengthening. Here are some examples:
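As a worked example of that arithmetic, here is a minimal sketch (names are assumptions) of the 14-candle Max/Min change readout:

```python
def max_min_change(correlations_now: list[float],
                   correlations_14_ago: list[float]) -> tuple[float, float]:
    """Change in the strongest (Max) and weakest (Min) correlations across all
    tracked timeframes/lookbacks versus 14 candles ago. A positive Max change
    means the strongest uptrend is strengthening; a positive Min change means
    the deepest downtrend is weakening (moving toward the positives)."""
    max_change = max(correlations_now) - max(correlations_14_ago)
    min_change = min(correlations_now) - min(correlations_14_ago)
    return max_change, min_change

# Both readings positive -> bullish per the description above
print(max_min_change([0.96, 0.91, -0.55], [0.90, 0.85, -0.70]))  # ≈ (0.06, 0.15)
```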
Final Thoughts:
And that is the indicator and the theory behind the indicator.
In a nutshell, the indicator simply tracks the correlation of a ticker to time on multiple timeframes. This allows you to make judgements about strength and sentiment, and also helps you adjust which tools and timeframes you use to perform your analyses.
As well, to make the indicator more user friendly, I tried to make the colours distinctively different. I was going to do different shades but it was a little difficult to visualize. As such, I have included a toggle-able legend with a breakdown of the colour codes!
That's it my friends, I hope you find it useful!
Safe trades and leave your questions, comments and feedback below!