# Multifractality

A multifractal system is a generalization of a fractal system in which a single exponent (the fractal dimension) is not enough to describe its dynamics; instead, a continuous spectrum of exponents (the so-called singularity spectrum) is needed.[1]

Multifractal systems are common in nature, especially in geophysics. They include fully developed turbulence, stock market time series, real world scenes, the Sun's magnetic field time series, heartbeat dynamics, human gait, and natural luminosity time series. Models have been proposed in various contexts ranging from turbulence in fluid dynamics to internet traffic, finance, image modeling, texture synthesis, meteorology, geophysics and more. The origin of multifractality in sequential (time series) data may be attributed, in part, to a mathematical convergence effect related to the central limit theorem that has as its focus a family of statistical distributions known as the Tweedie exponential dispersion models.[2]

From a practical perspective, multifractal analysis uses the mathematical basis of multifractal theory to investigate datasets, often in conjunction with other methods of fractal analysis and lacunarity analysis. The technique entails distorting datasets extracted from patterns to generate multifractal spectra that illustrate how scaling varies over the dataset. The techniques of multifractal analysis have been applied in a variety of practical situations such as predicting earthquakes and interpreting medical images.[3][4][5]

## Definition

In a multifractal system $s$, the behavior around any point is described by a local power law:

$s(\vec{x}+\vec{a}) - s(\vec{x}) \sim a^{h(\vec{x})}.$

The exponent $h(\vec{x})$ is called the singularity exponent, as it describes the local degree of singularity or regularity around the point $\vec{x}$.

The ensemble formed by all the points that share the same singularity exponent is called the singularity manifold of exponent $h$, and is a fractal set of fractal dimension $D(h)$. The curve $D(h)$ versus $h$ is called the singularity spectrum and fully describes the (statistical) distribution of the variable $s$.

In practice, the multifractal behaviour of a physical system $X$ is not directly characterized by its singularity spectrum $D(h)$. Data analysis rather gives access to the multiscaling exponents $\zeta(q),\ q\in\mathbb{R}$. Indeed, multifractal signals generally obey a scale invariance property which yields power law behaviours for multiresolution quantities depending on their scale $a$. Depending on the object under study, these multiresolution quantities, denoted by $T_X(a)$ in the following, can be local averages in boxes of size $a$, gradients over distance $a$, or wavelet coefficients at scale $a$. For multifractal objects, one usually observes a global power law scaling of the form:

$\langle T_X(a)^q \rangle \sim a^{\zeta(q)}$

at least in some range of scales and for some range of orders $q$. When such a behaviour is observed, one talks of scale invariance, self-similarity or multiscaling.[6]
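As a hypothetical numerical illustration (not from the text), the scaling law above can be checked on a deterministic binomial cascade, a standard toy multifractal for which the exponents are known in closed form; the weight `p` and all names below are illustrative choices:

```python
import numpy as np

# Assumed example: estimate the multiscaling exponents zeta(q) of a
# deterministic binomial cascade by log-log regression of moments of
# box masses against scale, then compare with the closed form.
p = 0.7  # cascade weight, an assumed parameter

def binomial_measure(levels, p):
    """Leaf masses of a deterministic binomial cascade after `levels` splits."""
    masses = np.array([1.0])
    for _ in range(levels):
        masses = np.concatenate([p * masses, (1 - p) * masses])
    return masses

levels = 12
masses = binomial_measure(levels, p)

qs = np.array([-2.0, -1.0, 0.5, 2.0, 3.0])
zeta_est = []
for q in qs:
    # Multiresolution quantity T_X(a): box masses at scale a = 2**-n,
    # obtained by summing adjacent boxes of the finest partition.
    log_a, log_moment = [], []
    for n in range(4, levels + 1):
        box = masses.reshape(2 ** n, -1).sum(axis=1)  # masses at scale 2**-n
        log_a.append(-n * np.log(2.0))
        log_moment.append(np.log(np.mean(box ** q)))
    zeta_est.append(np.polyfit(log_a, log_moment, 1)[0])

# Closed form for this cascade: <T^q> ~ a^{zeta(q)} with
# zeta(q) = 1 - log2(p**q + (1-p)**q).
zeta_theory = 1 - np.log2(p ** qs + (1 - p) ** qs)
print(np.round(zeta_est, 3), np.round(zeta_theory, 3))
```

Because the cascade is exactly scale invariant, the regression slopes match the closed-form exponents to numerical precision.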

## Estimation

Using the so-called multifractal formalism, it can be shown that, under some well-suited assumptions, there exists a correspondence between the singularity spectrum $D(h)$ and the multiscaling exponents $\zeta(q)$ through a Legendre transform. While the determination of $D(h)$ calls for some exhaustive local analysis of the data, which would result in difficult and numerically unstable calculations, the estimation of the $\zeta(q)$ relies on the use of statistical averages and linear regressions in log-log diagrams. Once the $\zeta(q)$ are known, one can deduce an estimate of $D(h)$ thanks to a simple Legendre transform.
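A minimal sketch of the Legendre-transform step, under assumptions not in the text: the scaling exponents are taken in their partition-function form $\tau(q)$ for a binomial cascade (where they are known exactly), differentiated numerically, and transformed into a spectrum via $f = q\alpha - \tau(q)$:

```python
import numpy as np

# Assumed example: recover a singularity spectrum from scaling exponents
# by a numerical Legendre transform, alpha = d tau/dq, f = q*alpha - tau.
p = 0.7  # binomial-cascade parameter, an assumed model choice

qs = np.linspace(-5, 5, 201)
tau = -np.log2(p ** qs + (1 - p) ** qs)  # exact tau(q) for this cascade

alpha = np.gradient(tau, qs)   # numerical derivative d tau / dq
f = qs * alpha - tau           # Legendre transform

# f peaks at the dimension of the support (1 for the unit interval),
# reached at q = 0; alpha spans roughly [-log2(p), -log2(1-p)].
print(round(float(alpha.min()), 3), round(float(alpha.max()), 3),
      round(float(f.max()), 3))
```

The same two lines (`np.gradient` then `q * alpha - tau`) apply to exponents estimated from data, with the caveat that numerical differentiation amplifies estimation noise.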

Multifractal systems are often modeled by stochastic processes such as multiplicative cascades. Interestingly, the $\zeta(q)$ receive a statistical interpretation, as they characterize the evolution of the distributions of the $T_X(a)$ as $a$ goes from larger to smaller scales. This evolution is often called statistical intermittency and betrays a departure from Gaussian models.
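A hypothetical illustration of both points in this paragraph: the code below builds a random multiplicative cascade and tracks how spread out the box-mass distribution becomes as the scale shrinks, which is the statistical intermittency just described. All parameters are assumed for the example:

```python
import numpy as np

# Assumed example: a random binomial cascade.  Each box splits in two and
# the weights p / (1-p) are assigned to the halves in random order.
rng = np.random.default_rng(0)
p = 0.7
levels = 14
masses = np.array([1.0])
for _ in range(levels):
    flip = rng.random(masses.size) < 0.5
    w_left = np.where(flip, p, 1 - p)
    # interleave left/right children so spatial order is preserved
    masses = np.column_stack([w_left * masses, (1 - w_left) * masses]).ravel()

# Coefficient of variation of box masses at successively finer scales:
# intermittency shows up as a distribution that spreads out as a shrinks.
cv = []
for n in range(2, levels + 1):
    box = masses.reshape(2 ** n, -1).sum(axis=1)
    cv.append(box.std() / box.mean())
print(np.round(cv, 3))  # spread typically grows as boxes shrink
```

For a Gaussian-like (non-intermittent) signal this ratio would stay essentially flat across scales, which is one quick diagnostic for multifractality.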


## Practical application of multifractal spectra

Multifractal analysis has been used in several fields in science to characterize various types of datasets.[7] In essence, multifractal analysis applies a distorting factor to datasets extracted from patterns, to compare how the data behave at each distortion. This is done using graphs known as multifractal spectra that illustrate how the distortions affect the data, analogous to viewing the dataset through a "distorting lens" as shown in the illustration.[8] Several types of multifractal spectra are used in practice.

### DQ vs Q

One practical multifractal spectrum is the graph of DQ vs Q, where DQ is the generalized dimension for a dataset and Q is an arbitrary set of exponents. The expression generalized dimension thus refers to a set of dimensions for a dataset (detailed calculations for determining the generalized dimension using box counting are described below).

#### Dimensional ordering

The general pattern of the graph of DQ vs Q can be used to assess the scaling in a pattern. The graph is generally decreasing and sigmoidal around Q=0, with D(Q=0) ≥ D(Q=1) ≥ D(Q=2). As illustrated in the figure, variation in this graphical spectrum can help distinguish patterns. The image shows D(Q) spectra from a multifractal analysis of binary images of non-, mono-, and multi-fractal sets. As is the case in the sample images, non- and mono-fractals tend to have flatter D(Q) spectra than multifractals.

The generalized dimension also offers some important specific information. D(Q=0) is equal to the Capacity Dimension, which in the analysis shown in the figures here is the box counting dimension. D(Q=1) is equal to the Information Dimension, and D(Q=2) to the Correlation Dimension. This relates to the "multi" in multifractal: multifractals have multiple dimensions in the D(Q) vs Q spectrum, whereas monofractals stay rather flat in that area.[8][9]
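These three special dimensions can be made concrete on a binomial measure, an assumed test case whose D(Q) values have closed forms; the Q=1 case is handled by the entropy form of the limit:

```python
import numpy as np

# Assumed example: generalized dimensions D(Q) of a binomial measure on
# [0,1] from its box probabilities at a fine dyadic scale.  D(0) is the
# capacity dimension of the support, D(1) the information dimension,
# and D(2) the correlation dimension.
p = 0.7
n = 16                       # dyadic depth: box size eps = 2**-n
P = np.array([1.0])
for _ in range(n):
    P = np.concatenate([p * P, (1 - p) * P])   # box probabilities

log_eps = -n * np.log(2.0)

def D(Q):
    if Q == 1:  # information dimension: entropy form of the Q -> 1 limit
        return np.sum(P * np.log(P)) / log_eps
    return np.log(np.sum(P ** Q)) / log_eps / (Q - 1)

print(D(0), D(1), D(2))  # decreasing, as the dimensional ordering requires
```

Here D(0) = 1 (the measure fills the interval), while D(1) and D(2) are strictly smaller, so the D(Q) spectrum is non-flat, the signature of a multifractal described above.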

### $f(\alpha)$ vs $\alpha$

Another useful multifractal spectrum is the graph of $f(\alpha)$ vs $\alpha$ (see calculations). These graphs generally rise to a maximum that approximates the fractal dimension at Q=0, and then fall. Like DQ vs Q spectra, they also show typical patterns useful for comparing non-, mono-, and multi-fractal patterns. In particular, for these spectra, non- and mono-fractals converge on certain values, whereas the spectra from multifractal patterns are typically humped over a broader extent.

## Estimating multifractal scaling from box counting

Multifractal spectra can be determined from box counting on digital images. First, a box counting scan is done to determine how the pixels are distributed; then, this "mass distribution" becomes the basis for a series of calculations.[8][9][10] The chief idea is that for multifractals, the probability, $P$, of a number of pixels, $m$, appearing in a box, $i$, varies as box size, $\epsilon$, to some exponent, $\alpha$, which changes over the image, as in Eq.0.0. NB: For monofractals, in contrast, the exponent does not change meaningfully over the set. $P$ is calculated from the box counting pixel distribution as in Eq.2.0.

$P_{i,\epsilon} \varpropto \epsilon^{-\alpha_i} \therefore \alpha_i \varpropto \frac{\log{P_{i,\epsilon}}}{\log{\epsilon^{-1}}}$ (Eq.0.0)
$\epsilon$ = an arbitrary scale (box size in box counting) at which the set is examined
$i$ = the index for each box laid over the set for an $\epsilon$
$m_{i,\epsilon}$ = the number of pixels or mass in any box, $i$, at size $\epsilon$
$N_\epsilon$ = the total number of boxes that contained more than 0 pixels, for each $\epsilon$

$M_\epsilon = \sum_{i=1}^{N_\epsilon} m_{i,\epsilon}$ = the total mass or sum of pixels over all boxes for this $\epsilon$ (Eq.1.0)

$P_{i,\epsilon} = \frac{m_{i,\epsilon}}{M_\epsilon}$ = the probability of this mass at $i$ relative to the total mass for a box size (Eq.2.0)
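The mass-distribution step of Eq.1.0 and Eq.2.0 can be sketched in a few lines; the input here is a hypothetical random binary image, and the function name is an assumed one:

```python
import numpy as np

# Sketch of the box-counting "mass distribution": count pixels per box
# of side eps (m_{i,eps}), keep occupied boxes, normalize by the total
# mass M_eps to get probabilities P_{i,eps}.
rng = np.random.default_rng(1)
image = rng.random((256, 256)) < 0.1       # assumed binary test image

def mass_distribution(image, eps):
    """Return P_{i,eps} over the N_eps occupied boxes of side eps."""
    h, w = image.shape
    m = image[:h - h % eps, :w - w % eps].reshape(
        h // eps, eps, w // eps, eps).sum(axis=(1, 3))   # m_{i,eps}
    m = m[m > 0]                  # keep boxes with more than 0 pixels
    return m / m.sum()            # P_{i,eps} = m_{i,eps} / M_eps

P = mass_distribution(image, 16)
print(P.size, P.sum())            # N_eps boxes; probabilities sum to 1
```

Repeating this over a series of box sizes $\epsilon$ yields the family of distributions that all later equations operate on.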

$P$ is used to observe how the pixel distribution behaves when distorted in certain ways as in Eq.3.0 and Eq.3.1:

$Q$ = an arbitrary range of values to use as exponents for distorting the data set

$I_{(Q),\epsilon} = \sum_{i=1}^{N_\epsilon} P_{i,\epsilon}^Q$ (Eq.3.0)

• When $Q=1$, Eq.3.0 equals 1, the usual sum of all probabilities, and when $Q=0$, every term is equal to 1, so the sum is equal to the number of boxes counted, $N_\epsilon$.

$P_{(Q),i,\epsilon} = \frac{P_{i,\epsilon}^Q}{\sum_{i=1}^{N_\epsilon} P_{i,\epsilon}^Q}$ = the distorted probability at $i$, renormalized over all boxes (Eq.3.1)
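The distortion step of Eq.3.0 and Eq.3.1 is a one-liner per Q; the probability vector below is an assumed stand-in for the box probabilities $P_{i,\epsilon}$:

```python
import numpy as np

# Sketch of the "distorting lens": raise box probabilities to a range of
# exponents Q (Eq.3.0) and renormalize them (Eq.3.1).
P = np.array([0.5, 0.25, 0.125, 0.125])   # assumed P_{i,eps}, sums to 1

for Q in (-2.0, 0.0, 1.0, 2.0):
    I = np.sum(P ** Q)          # Eq.3.0: I_{(Q),eps}
    P_Q = P ** Q / I            # Eq.3.1: distorted, renormalized probabilities
    print(Q, I, P_Q.sum())

# Sanity checks from the text: I = 1 at Q = 1, and I = N_eps at Q = 0.
```

Negative Q magnifies the low-probability boxes and positive Q the dense ones, which is exactly how the spectra separate sparse from dense regions of the pattern.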

These distorting equations are further used to address how the set behaves when scaled or resolved or cut up into a series of $\epsilon$-sized pieces and distorted by Q, to find different values for the dimension of the set, as in the following:

• An important feature of Eq.3.0 is that it can also be seen to vary according to scale raised to the exponent $\tau$ in Eq.4.0:

$I_{(Q),\epsilon} \varpropto \epsilon^{\tau_{(Q)}}$ (Eq.4.0)

Thus, a series of values for $\tau_{(Q)}$ can be found from the slopes of the regression line for the log of Eq.3.0 vs the log of $\epsilon$ for each $Q$, based on Eq.4.1:

$\tau_{(Q)} = \lim_{\epsilon \to 0}\left[\frac{\ln{I_{(Q),\epsilon}}}{\ln{\epsilon}}\right]$ (Eq.4.1)
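This regression step can be sketched on a binomial measure, an assumed test case chosen because its $\tau_{(Q)}$ has a closed form against which the slopes can be checked:

```python
import numpy as np

# Sketch of Eq.4.1: tau(Q) as the slope of ln I_{(Q),eps} vs ln eps over
# a range of dyadic box sizes, for an assumed binomial measure.
p = 0.7
levels = 12
masses = np.array([1.0])
for _ in range(levels):
    masses = np.concatenate([p * masses, (1 - p) * masses])

def tau(Q):
    log_eps, log_I = [], []
    for n in range(4, levels + 1):
        P = masses.reshape(2 ** n, -1).sum(axis=1)   # P_{i,eps}, eps = 2**-n
        log_eps.append(-n * np.log(2.0))
        log_I.append(np.log(np.sum(P ** Q)))         # Eq.3.0
    return np.polyfit(log_eps, log_I, 1)[0]          # slope = tau(Q)

# Closed form for this measure: tau(Q) = -log2(p**Q + (1-p)**Q),
# so tau(0) = -1 (minus the box-counting dimension) and tau(1) = 0.
print(tau(0.0), tau(1.0), tau(2.0))
```

On real data the same loop runs over the box sizes actually scanned, and the quality of the linear fit in the log-log diagram is itself a check that the power law of Eq.4.0 holds.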
• For the generalized dimension:

$D_{(Q)} = \lim_{\epsilon \to 0}\left[\frac{\ln{I_{(Q),\epsilon}}}{\ln{\epsilon^{-1}}}\right](1-Q)^{-1}$ (Eq.5.0)

$D_{(Q)} = \frac{\tau_{(Q)}}{Q-1}$ (Eq.5.1)

$\alpha_{(Q)} = \frac{d\tau_{(Q)}}{dQ}$ (Eq.5.2)

$f_{(\alpha_{(Q)})} = Q\alpha_{(Q)} - \tau_{(Q)}$ (Eq.5.3)

• $\alpha_{(Q)}$ is estimated as the slope of the regression line for $\ln{A_{\epsilon,Q}}$ vs $\ln{\epsilon}$, where:

$A_{\epsilon,Q} = \sum_{i=1}^{N_\epsilon} P_{(Q),i,\epsilon} \times P_{i,\epsilon}$ (Eq.6.0)

• Then $f_{(\alpha_{(Q)})}$ is found from Eq.5.3.
• The mean $\tau_{(Q)}$ is estimated as the slope of the log-log regression line for $\tau_{(Q),\epsilon}$ vs $\epsilon$, where:

$\tau_{(Q),\epsilon} = \frac{\sum_{i=1}^{N_\epsilon} P_{i,\epsilon}^{Q-1}}{N_\epsilon}$ (Eq.6.1)
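The whole pipeline can be sketched end to end for one Q at a time. This is a hedged reconstruction, not the exact algorithm of the cited software: it uses the distorted probabilities of Eq.3.1 as weights, takes $\alpha_{(Q)}$ as the slope of the weighted log-probability sum vs $\ln\epsilon$ (a common direct estimator), and applies Eq.5.3; the binomial measure is an assumed test case:

```python
import numpy as np

# Assumed end-to-end example: estimate (alpha, f(alpha)) points of the
# multifractal spectrum of a binomial measure from its box probabilities.
p = 0.7
levels = 12
masses = np.array([1.0])
for _ in range(levels):
    masses = np.concatenate([p * masses, (1 - p) * masses])

def spectrum_point(Q):
    log_eps, A, log_I = [], [], []
    for n in range(4, levels + 1):
        P = masses.reshape(2 ** n, -1).sum(axis=1)   # P_{i,eps}
        P_Q = P ** Q / np.sum(P ** Q)                # Eq.3.1 weights
        log_eps.append(-n * np.log(2.0))
        A.append(np.sum(P_Q * np.log(P)))            # weighted log-probability
        log_I.append(np.log(np.sum(P ** Q)))         # Eq.3.0
    alpha = np.polyfit(log_eps, A, 1)[0]             # slope -> alpha(Q)
    tau = np.polyfit(log_eps, log_I, 1)[0]           # slope -> tau(Q)
    return alpha, Q * alpha - tau                    # Eq.5.3

for Q in (-2.0, 0.0, 2.0):
    print(Q, spectrum_point(Q))
```

Sweeping Q over a range and plotting the resulting $(\alpha, f(\alpha))$ pairs produces the humped spectrum described earlier, with its peak at the dimension of the support reached at Q = 0.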

In practice, the probability distribution depends on how the dataset is sampled, so optimizing algorithms have been developed to ensure adequate sampling.[8]