
Multistability in neural systems with random cross-connections

  • Original Article
  • Biological Cybernetics

Abstract

Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems using a firing rate model framework, in which clusters of similarly responsive neurons are represented as single units that interact with each other through independent random connections. We explore the range of conditions in which multistability arises via recurrent input from other units while individual units, typically with some degree of self-excitation, lack sufficient self-excitation to become bistable on their own. We find many cases of multistability, defined as the system possessing more than one stable fixed point, in which stable states arise via a network effect, allowing subsets of units to maintain each other's activity because their net input to each other when active is sufficiently positive. In terms of the strength of within-unit self-excitation and the standard deviation of random cross-connections, the region of multistability depends on the response function of units. Indeed, multistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the response function rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which produces Zipf’s Law when enumerating the proportion of trials within which random initial conditions lead to a particular stable state of the system.


Data availability

MATLAB codes used to produce the results in this paper are available for public download at https://github.com/primon23/Multistability-Paper.


Acknowledgements

The authors are grateful to NIH-NINDS for support of this work via R01 NS104818 and to the Swartz Foundation for a fellowship to SQ. We acknowledge computational support from the Brandeis HPCC which is partially supported by the NSF through DMR-MRSEC 2011846 and OAC-1920147. SM is grateful to Merav Stern for helpful conversations in the early stages of this work.

Author information

Contributions

All authors contributed to the writing and editing of the manuscript and approved the final version. Simulations were carried out by JB, analysis by SM, SQ, and PM. The first draft was written by PM, who also conceived of the project.

Corresponding author

Correspondence to Paul Miller.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Communicated by Benjamin Lindner.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Monte Carlo simulation method

Our standard procedure is to simulate 100 different realizations of the connectivity matrix to produce 100 random networks for a given parameter combination. For each connectivity matrix, we then complete sets of multiple trials, each trial with a distinct initial condition (100 trials for the perturbation analysis in Fig. 2 and for scaling \(g\) in Fig. 3; 200 trials for the parameter grids in Fig. 4; and \({10}^{6}\) or \({10}^{5}\) trials, respectively, for the networks with binary units in Figs. 8 and 9). For the small (\(N\le 25\)) networks with binary units in Fig. 8, all \({2}^{N}\) combinations of initial conditions are used, with each unit initialized at either its minimum or maximum rate.
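
For concreteness, the following MATLAB sketch shows one way to generate the connectivity realizations and initial conditions described here, assuming dynamics of the form \({\dot{x}}_{i}=-{x}_{i}+sf\left({x}_{i}\right)+\frac{g}{\sqrt{N}}\sum_{j}{J}_{ij}f\left({x}_{j}\right)\). The parameter values and variable names are illustrative assumptions, not those of the released code.

```matlab
% Minimal sketch: generate random networks and initial conditions for the
% Monte Carlo procedure (illustrative parameter values).
N = 100; nNets = 100; nTrials = 200;

nets = cell(nNets, 1);
for k = 1:nNets
    J = randn(N);                     % one realization of the cross-connections
    J(1:N+1:end) = 0;                 % self-connections are handled separately via s
    nets{k} = J;
end

% Initial conditions: the first trial starts near quiescence; all other trials
% use uniform random values passed through a logistic function with
% x_th = 0.5 and Delta = 0.1, as described in the text.
initTransform = @(u) 1./(1 + exp((0.5 - u)/0.1));
x0 = zeros(N, nTrials);
x0(:, 2:end) = initTransform(rand(N, nTrials - 1));
```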

The continuous models are simulated using MATLAB’s “ode45” function. Each trial is simulated until either a maximum simulation time is reached (5000 \(\tau \) for Fig. 3 and 10,000 \(\tau \) for Fig. 4), or until a stopping condition is reached, namely that the maximum \({\dot{x}}_{i}\) at a given timestep is less than \(2\times {10}^{-6}\). If this stopping condition is reached, the activity is considered to have reached a stable state, because the network possesses a point attractor at that set of firing rates. Logistic units are classified as active if their firing rate exceeds 0.5. Tanh units are considered active if the absolute value of their rate exceeds 0.001. For the continuous models, typically the first trial is initialized with inputs near zero, to test whether the quiescent state is stable. For all subsequent trials, the initial rates of the units are drawn from a uniform random distribution over 0–1 and then transformed by a logistic function with \({x}_{th}=0.5\) and \(\Delta =0.1\).
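
As an illustration, a minimal MATLAB sketch of a single trial, using “ode45” with an event function implementing the stopping condition, is given below. The network, response-function parameters, and initial condition used here are illustrative assumptions rather than the released code.

```matlab
% Minimal sketch: integrate one trial of the continuous model until either the
% maximum time is reached or max|dx_i/dt| falls below 2e-6 (illustrative values).
N = 100; g = 1.5; s = 0.5;
Delta_f = 0.2; xth_f = 0.53;                  % example logistic response parameters
f   = @(x) 1./(1 + exp((xth_f - x)/Delta_f)); % logistic response function
J   = randn(N); J(1:N+1:end) = 0;             % one connectivity realization
rhs = @(t, x) -x + s*f(x) + (g/sqrt(N))*(J*f(x));
x0  = 1./(1 + exp((0.5 - rand(N, 1))/0.1));   % logistic-transformed uniform start

tmax = 1e4;                                   % maximum simulation time, in units of tau
opts = odeset('Events', @(t, x) stopEvent(t, x, rhs));
[t, X] = ode45(rhs, [0 tmax], x0, opts);

finalRates = f(X(end, :)');                   % rates at the end of the trial
nActive    = sum(finalRates > 0.5);           % logistic units are active if rate > 0.5

function [value, isterminal, direction] = stopEvent(t, x, rhs)
% Terminate integration once the largest |dx_i/dt| drops below 2e-6.
value      = max(abs(rhs(t, x))) - 2e-6;
isterminal = 1;
direction  = -1;
end
```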

For the perturbation analysis, each trial of each network is simulated for a duration of 21,000 \(\tau \). Then, at each of 100 linearly spaced time points between 20,000 and 20,800 \(\tau \), 10% of the units’ firing rates are randomly perturbed upwards or downwards by \({10}^{-5}\), and the simulation is then continued from each such perturbed state for 200 \(\tau \). The root mean squared (RMS) deviation of the perturbed simulation from the original simulation quantifies the extent to which the perturbation causes a divergence in activity. The median RMS deviation over the 100 perturbations is then used to classify the dynamics of each trial as a point attractor, a limit cycle, or chaos. The median RMS deviation decays exponentially for point attractors, increases exponentially for chaos, and increases but reaches a plateau at a low level for limit cycles. Classification thresholds are determined by the square of the correlation (\({R}^{2}\)) of a linear fit to the exponential RMS deviation and by the magnitude of the RMS deviation averaged between 190 and 200 \(\tau \) post-perturbation. Trials with final RMS deviations below half the magnitude of the initial perturbation, and with no unit having a change in its firing rate exceeding \({10}^{-4}\) in the last 10 \(\tau \) of the unperturbed simulation, are classified as point attractors. To classify trials as chaotic vs. limit cycles, a classification boundary is determined as a function of each trial’s linear-fit \({R}^{2}\) and final RMS deviation. Trials above the line \(\mathrm{RMS \, deviation}=0.025{e}^{(-0.125 {R}^{2})}\) are classified as chaotic. This boundary separates the two dynamical regimes because it accounts both for chaotic trials that very quickly converge to a large RMS deviation (large RMS deviation and low \({R}^{2}\)) and for chaotic trials that have a slower exponential increase in their RMS deviation (lower RMS deviation at 190–200 \(\tau \) but high \({R}^{2}\)). Final activity states of the unperturbed simulations are used to confirm these classifications.
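
The classification logic can be summarized by the following MATLAB sketch, which assumes the per-trial summary quantities (final RMS deviation, the \({R}^{2}\) of the fit, and the largest late change in rate of the unperturbed run) have already been computed; the numerical values shown are illustrative.

```matlab
% Minimal sketch: classify one trial as point attractor, limit cycle, or chaotic
% from its summary statistics (example values shown).
pertSize      = 1e-5;   % magnitude of the applied perturbation
finalRMS      = 3e-6;   % example: RMS deviation averaged over 190-200 tau
R2            = 0.95;   % example: R^2 of the linear fit to the RMS-deviation growth/decay
maxLateChange = 5e-5;   % example: largest rate change in last 10 tau of the unperturbed run

if finalRMS < pertSize/2 && maxLateChange < 1e-4
    label = "point attractor";             % perturbation decays and activity is static
elseif finalRMS > 0.025*exp(-0.125*R2)
    label = "chaotic";                     % above the empirical boundary in (R^2, RMS) space
else
    label = "limit cycle";                 % perturbation grows but plateaus at a low level
end
disp(label)
```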

Fig. 10

The dynamical regime of individual networks changes as g is scaled. a Similar to Fig. 3b, but for networks simulated over a smaller range of g. A total of 100 random networks of logistic units (Δ = 0.2) with no self-connections (s = 0) of varying size (N = 10, 50, 100, 500, 1000) are simulated at each value of g. The same network can gain and lose multistability as g varies. Color scale indicates the number of point attractors found within 100 trials. White indicates values of g that are not simulated due to computational limits. b The same simulations as in (a), but where color now indicates the classification of the set of activity observed across the 100 simulated trials. 1, no trials converge; 2, stable quiescence + some trials fail to converge; 3, only stable quiescence; 4, only a single stable active state; 5, some trials fail to converge + stable quiescence + at least one stable active state; 6, stable quiescence + a single stable active state; 7, multiple stable active states + no stable quiescence; 8, multiple stable active states + stable quiescence; 9, some trials fail to converge + a single stable active state; 10, some trials fail to converge + multistable

Appendix 2: Choice of single-unit input threshold

For comparison across systems with distinct single-unit response functions, \(f\left(x\right)\), we adjust the offset, \({x}_{th}\), such that in all cases a single unit becomes bistable at a self-connection strength of \(s=1\).

For the logistic response function, this requirement means that a saddle-node bifurcation occurs at \(s=1\), with the unstable and stable fixed points colliding at \({x}^{*}\) given by \({-x}^{*}+sf\left({x}^{*}\right)=0\), such that \({x}^{*}=f\left({x}^{*}\right)\), and \(\frac{{\text{d}}}{{\text{d}}x}{\left[-x+sf\left(x\right)\right]}_{{x}^{*}}=0\), such that \({\left.\frac{{\text{d}}f\left(x\right)}{{\text{d}}x}\right|}_{{x}^{*}}=1\). Combining these equations and using the result for the logistic function that \(\frac{{\text{d}}f\left(x\right)}{{\text{d}}x}=\frac{1}{\Delta }f\left(x\right)\left[1-f\left(x\right)\right]\) leads to the requirement:

$${x}_{th}=\frac{1}{2}+\sqrt{\frac{1}{4}-\Delta }+ \Delta {\text{ln}}\Delta -2\Delta {\text{ln}}\left(\frac{1}{2}+\sqrt{\frac{1}{4}-\Delta }\right).$$
(13)

For the binary response function, \(f\left(x\right)={\text{Heaviside}}(x-{x}_{th})\), we have \({x}_{th}=1\), which can be seen from Eq. (13) in the limit \(\Delta \to 0\).

For the hyperbolic tangent response function, \(f\left(x\right)={\text{tanh}}\left(\frac{x-{x}_{th}}{\Delta }\right)\), a similar derivation leads to

$${x}_{th}=\sqrt{1-\Delta }-\frac{\Delta }{2}{\text{ln}}\left(\frac{1+\sqrt{1-\Delta }}{1-\sqrt{1-\Delta }}\right),$$
(14)

which yields \({x}_{th}=0\) if \(\Delta =1\), matching the simplest response function, \(f\left(x\right)={\text{tanh}}\left(x\right)\), and as with the binary response function, \({x}_{th}=1\) if \(\Delta =0\).
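
As a numerical check of Eq. (13), the short MATLAB sketch below confirms that, with this choice of \({x}_{th}\) and \(s=1\), a logistic unit satisfies \({x}^{*}=f\left({x}^{*}\right)\) and \(f^{\prime}\left({x}^{*}\right)=1\) at the saddle-node point; the value of \(\Delta \) is an illustrative choice.

```matlab
% Minimal sketch: verify Eq. (13) numerically for one value of Delta.
Delta = 0.2;
fstar = 1/2 + sqrt(1/4 - Delta);                       % f(x*) at the bifurcation
xth   = fstar + Delta*log(Delta) - 2*Delta*log(fstar); % Eq. (13), rearranged
f     = @(x) 1./(1 + exp((xth - x)/Delta));            % logistic response function
fp    = @(x) f(x).*(1 - f(x))/Delta;                   % its derivative
xstar = fstar;                                         % since x* = s*f(x*) with s = 1
fprintf('x* - f(x*) = %.2e,  f''(x*) - 1 = %.2e\n', xstar - f(xstar), fp(xstar) - 1);
```

Both printed values should be zero to within numerical precision.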

Appendix 3: Networks with multiple states and no self-connections

Here, we verify that the majority of simulated networks up to a size of \(N=1000\) show multiple point attractors, while the range of \(g\) in which such multistability exists converges with increasing \(N\) to the narrow range found in our infinite-N analysis (\(1.53<g<1.57)\).

Appendix 4: General mean-field methods

4.1 Self-consistency of solutions

The self-consistent solution of Eqs. 2–6 is straightforward when \(x-sf\left(x\right)\) is a monotonic function, such that there is a one-to-one mapping from \(\eta \) to \(x\) following Eq. 5. The variance, \({\sigma }^{2}\), of the Gaussian distribution of \(\eta \) is the only parameter to be calculated, so the iteration is one-dimensional and, in our experience, always converges using the MATLAB solver “fzero.” The procedure is as follows: given a value of \({\sigma }^{2}\), which defines \(P\left(\eta \right)=\frac{1}{\sqrt{2\pi {\sigma }^{2}}}{\text{exp}}\left(-\frac{{\eta }^{2}}{2{\sigma }^{2}}\right)\) (from Eq. 6), we numerically integrate over \(\eta \). At each value of \(\eta \), we calculate \(x\left(\eta \right)\) by numerically inverting \(x-sf\left(x\right)=\eta \) (from Eq. 5), and hence calculate \(f\left(x\left(\eta \right)\right)\) and \({f}^{2}\left(x\left(\eta \right)\right)\). Multiplying \({f}^{2}\left(x\left(\eta \right)\right)\) by \(P\left(\eta \right)\) and integrating numerically yields \(\langle {f}^{2}\left( {\sigma }^{2}\right)\rangle \) (where we indicate its dependence on \({\sigma }^{2}\) because \(P\left(\eta \right)\) depends on \({\sigma }^{2}\)). For a self-consistent solution, \({\sigma }^{2}={g}^{2}\langle {f}^{2}\left( {\sigma }^{2}\right)\rangle \) (Eq. 4), so we require the solver to find the zero-crossing of \({g}^{2}\langle {f}^{2}\left( {\sigma }^{2}\right)\rangle -{\sigma }^{2}\) calculated in this manner. We use standard numerical integration grids of \(5\times {10}^{4}\) points and test integration grids that are tenfold finer for select parameters to ensure the results are numerically accurate. Since multiple solutions for \({\sigma }^{2}\) are possible, we systematically vary the initial conditions and bounds of \({\sigma }^{2}\) used to find zero crossings, to ensure we reach all solutions. We only ever find either one or three solutions. With three solutions, we only count the lowest and highest values of \({\sigma }^{2}\), as the intermediate value corresponds to an unstable solution of Eq. 2.
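
As an illustration of this procedure, a minimal MATLAB sketch for the monotonic case is given below. The parameter values, integration grid, and bracketing intervals are illustrative assumptions; the released code at the repository listed above is the reference implementation.

```matlab
% Minimal sketch: self-consistent sigma^2 for a logistic network in the
% monotonic case (s < 4*Delta), using fzero for both the outer zero-crossing
% and the inner inversion of x - s*f(x) = eta.
g = 1.6; s = 0.5; Delta = 0.2;
xth = 0.5 + sqrt(0.25 - Delta) + Delta*log(Delta) ...
      - 2*Delta*log(0.5 + sqrt(0.25 - Delta));              % Eq. (13)
f  = @(x) 1./(1 + exp((xth - x)/Delta));
z  = linspace(-8, 8, 2001);                                  % eta = sigma*z
Pz = exp(-z.^2/2)/sqrt(2*pi);                                % standard normal density

% Find sigma^2 such that g^2*<f^2>(sigma^2) - sigma^2 = 0 (Eq. 4), assuming the
% bracket below contains a sign change for these parameters.
sig2 = fzero(@(v) mismatch(v, z, Pz, f, g, s), [1e-3, 4]);

function m = mismatch(sig2, z, Pz, f, g, s)
% Evaluate g^2*<f^2> - sigma^2 by inverting x - s*f(x) = eta at each grid point.
sig = sqrt(sig2);
f2  = zeros(size(z));
for i = 1:numel(z)
    eta   = sig*z(i);
    xi    = fzero(@(x) x - s*f(x) - eta, [eta - 1e-6, eta + s + 1e-6]);
    f2(i) = f(xi)^2;
end
m = g^2*trapz(z, Pz.*f2) - sig2;
end
```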

In situations where \(x-sf\left(x\right)\) is non-monotonic, the solution \(x\left(\eta \right)\) is not single-valued across a range of values of \(\eta \). In this case, an additional parameter is the value of \(\eta \) at which we switch from one branch of \(x\) to another when calculating \(x\left(\eta \right)\). We test for solutions with different switching points in order to assess whether any solution is stable, as described in the later subsection, Multiple solutions for \(x\left(\eta \right)\). In such non-monotonic situations, we use finer integration grids of \(5\times {10}^{5}\) points because of the sensitivity of \(x\left(\eta \right)\) near turning points of the curve. We also test integration grids that are tenfold finer for select parameters to ensure the results are numerically accurate. (See https://github.com/primon23/Multistability-Paper/ for the full code.)

4.2 Stability of solutions

To test whether a distribution of the interacting variables, \(x\), produces a stable fixed point, it is necessary to obtain information about the eigenvalues of the Jacobian matrix of the dynamical equations expanded linearly about the fixed point (Strogatz 2015). If all such eigenvalues have a negative real part then the fixed point is stable. Linearization around a fixed point, \({x}^{*}\), yields

$$\frac{{\text{d}}x}{{\text{d}}t}=-x+sDx+\frac{g}{\sqrt{N}}JDx$$
(15)

where \(D\) is a diagonal matrix with elements equal to the corresponding derivatives of the response function, \(f^{\prime}\left({x}^{*}\right)\), and \(J\) is the unit-variance, zero-mean Gaussian connectivity matrix.

We follow the methods of others (Ahmadian et al. 2015; Stern et al. 2014), who showed that the eigenvalues of such a system are found at the complex values, \(z\), where

$$ {\text{Tr}}\left[ {\left( {M_{z} M_{z}^{\dag } } \right)^{ - 1} } \right] \ge 1 $$

with

$${M}_{z}=\frac{\left(z+1-sf^{\prime}\left({x}^{*}\right)\right)}{gf^{\prime}\left({x}^{*}\right)}.$$

In the large-\(N\) limit, the sum within the trace becomes an integral over the distribution of \(x\), yielding the criterion (Ahmadian et al. 2015; Stern et al. 2014):

$$\int {\rm d}xP\left(x\right)\frac{{g}^{2}{\left[f^{\prime}\left(x\right)\right]}^{2}}{{\left|z+1-sf^{\prime}\left(x\right)\right|}^{2}}=\int {\rm d}\eta P\left(\eta \right)\frac{{g}^{2}{\left[f^{\prime}\left(x\left(\eta \right)\right)\right]}^{2}}{{\left|z+1-sf^{\prime}\left(x\left(\eta \right)\right)\right|}^{2}}\ge 1.$$
(16)

As noted by Stern et al. (2014), for the system to be stable it is necessary that Eq. (16) is not satisfied for any \(z\) with \(Re\left[z\right]>0\). This allows us to assess the case \(Re\left[z\right]=0\) and note that any nonzero contribution to \(Im\left[z\right]\) increases the absolute value of the denominator in Eq. 16, so if there are no eigenvalues at \(z=0\) there cannot be any elsewhere on the imaginary axis. Therefore, in general, for there to be no eigenvalues with positive real part, we require that

$$\frac{1}{\sqrt{2\pi {\sigma }^{2}}}\int {\text{d}}\eta {\text{exp}}\left(-\frac{{\eta }^{2}}{2{\sigma }^{2}}\right)\frac{{g}^{2}{\left[f^{\prime}\left(x\left(\eta \right)\right)\right]}^{2}}{{\left[1-sf^{\prime}\left(x\left(\eta \right)\right)\right]}^{2}}<1,$$
(17)

where we have substituted for \(P\left(\eta \right)\) and \({\sigma }^{2}={g}^{2}\langle {f}^{2}\left(x\right)\rangle .\) We have also assumed that the function in the denominator, \(1-sf^{\prime}\left(x\left(\eta \right)\right)\), is positive, as any negative portion of the function means there is a divergent positive contribution to the integral for some \(z\) with \(Re\left[z\right]=\) \(sf^{\prime}\left(x\left(\eta \right)\right)-1>0\).
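
For concreteness, the following MATLAB sketch evaluates the left-hand side of Eq. (17) for a logistic response function; the parameter values, and in particular the placeholder value of \({\sigma }^{2}\), are illustrative assumptions (in practice \({\sigma }^{2}\) would be a self-consistent value obtained as in Appendix 4.1).

```matlab
% Minimal sketch: evaluate the stability criterion of Eq. (17) for a candidate
% self-consistent sigma^2 (placeholder value used here for illustration).
g = 1.6; s = 0.5; Delta = 0.2;
xth = 0.5 + sqrt(0.25 - Delta) + Delta*log(Delta) ...
      - 2*Delta*log(0.5 + sqrt(0.25 - Delta));              % Eq. (13)
f  = @(x) 1./(1 + exp((xth - x)/Delta));
fp = @(x) f(x).*(1 - f(x))/Delta;                            % f'(x) for the logistic
sig2 = 0.9;                                                  % placeholder sigma^2

z  = linspace(-8, 8, 2001); Pz = exp(-z.^2/2)/sqrt(2*pi);    % eta = sqrt(sig2)*z
integrand = zeros(size(z));
for i = 1:numel(z)
    eta = sqrt(sig2)*z(i);
    xi  = fzero(@(x) x - s*f(x) - eta, [eta - 1e-6, eta + s + 1e-6]);
    integrand(i) = g^2*fp(xi)^2/(1 - s*fp(xi))^2;            % assumes 1 - s*f'(x) > 0 (see text)
end
isStable = trapz(z, Pz.*integrand) < 1;                      % Eq. (17): stable if true
```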

4.3 Verification of analytic methods

While we exactly follow the methods of prior work (Stern et al. 2014) in the infinite-\(N\) analysis, the validity of ignoring the indices of units in a mean-field manner has not been established rigorously for static solutions of the dynamical system. In this subsection, we justify our method.

First, we explain why, for a specific set of firing rates, each unit experiences network input that corresponds to a random sample from a Gaussian distribution of zero mean and variance equal to \({g}^{2}\langle {f}^{2}\left(x\right)\rangle \): each input from one presynaptic unit to one postsynaptic unit is a random sample (independent of those of other units, given that connections are independent) from a Gaussian of zero mean, and the sum of such Gaussian samples (which provides the total input) is itself a Gaussian sample from a distribution with zero mean and variance equal to the sum of the variances of the individual samples. Since self-connections are treated differently, the total input to each unit is sampled from a slightly different distribution of \(N-1\) connections, so for small \(N\) the variance of the Gaussian distribution used in sampling is slightly different for each unit. However, the impact of one input is negligible in the large-\(N\) limit (Fig. 10).

If one input distribution satisfies the self-consistency requirements of Eqs. 2–6 and is stable, then the population activity is stable in the statistical sense (as an ensemble of rates), but that does not mean that each unit has a stable fixed firing rate. While the methods of Ahmadian et al. (2015) described in the preceding subsection allow us to determine whether a fixed point of the dynamical system is stable, we must still establish the existence of such a fixed point. We use the rest of this subsection to justify our claim of multiple stable fixed points, first in the case that the rates of individual units are single-valued in their network input (\(x\) is monotonic in \(\eta \)), and then to show that multiple stable fixed points are possible if the circuit has two stable distributions of \(\eta \).

  (1) In Fig. 11, we show that the self-consistent solutions for the mean squared firing rates, \(\langle {f}^{2}\left(x\right)\rangle \), and hence the variance of the zero-mean input distribution, of stable fixed points in simulated networks approach those of the infinite-\(N\) calculations as \(N\) increases (see also Cabana and Touboul 2018, e.g., Theorem 5, for justification).

  (2) As \(N\) increases, the phase diagrams in Fig. 4 indicating multiple stable fixed points approach the corresponding infinite-\(N\) limits in Figs. 6 and 12.

  (3) As \(N\) increases, the range of \(g\) over which we find multistability without self-connections (Fig. 3) in simulations matches the range found using the infinite-\(N\) methods (Fig. 6b, x-axis).

  (4) As a heuristic argument, for a given stable distribution of inputs, there are \(N!\) combinations for ordering the units by firing rate. For the system to have a stable fixed point, one of those \(N!\) combinations must provide inputs that are ordered in the same manner. The probability of any individual order matching the required order is \(1/N!\). Therefore, for large \(N\) this leads to a probability of \(1-{\left(1-\frac{1}{N!}\right)}^{N!}\cong 1-{e}^{-1}\) of there being one or more stable fixed points, with the number of fixed points following a Poisson distribution of mean 1 (see the numerical sketch following this list). Indeed, in Fig. 2e, the number of networks with at least one stable active attractor state is not significantly different from the expected number from this argument (58 out of 100 from simulations versus 63 out of 100 expected, though note that the sampling in simulations can be an undercount). Moreover, if we count the number of distinct stable active attractor states in each network, we find the numbers 42, 40, 13, 4, 1 for 0, 1, 2, 3, or 4 states, respectively, a result which is not significantly different from a Poisson distribution with mean of 1 (\(p=0.46\), Kolmogorov–Smirnov test).

  (5) In cases where the firing rates are always single-valued in the external inputs (\(s<4\Delta \) for networks of logistic units), the stable solutions may not be fixed points, as discussed in item (4). However, for parameters whereby a finite fraction, \(\frac{M}{N}\), of units have two possible stable firing rates given their external inputs, even allowing for correlations, there is a combinatorially large number, of order \({2}^{M}\), of extra state combinations that can produce the desired ordering of inputs, suggesting the probability of multistability is on the order of \(1-{e}^{-{2}^{M}}\), which approaches 1 with increasing \(N\) and \(M\).

  (6) When we use these methods to analyze networks in which units have the response function \(f\left(x\right)={\text{tanh}}(x)\), our results exactly match those of prior work, including the well-established transition to chaos at \(g=1\) if \(s=0\) (Fig. 12a).
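
As an arithmetic illustration of the heuristic in item (4), the following MATLAB sketch compares the observed counts quoted above with the prediction of a Poisson distribution of mean 1; the observed counts are copied from the text and the variable names are illustrative.

```matlab
% Minimal sketch: heuristic probability of at least one stable active state and
% the Poisson(1) prediction for the counts of distinct stable active states.
pAtLeastOne       = 1 - exp(-1);            % limit of 1 - (1 - 1/N!)^(N!), approx. 0.632
expectedOfHundred = 100*pAtLeastOne;        % approx. 63 of 100 networks
observedCounts    = [42 40 13 4 1];         % networks with 0,1,2,3,4 stable active states
k = 0:4;
poissonPred = 100*exp(-1)./factorial(k);    % Poisson(1) prediction for 100 networks
disp([observedCounts; poissonPred])
```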

Fig. 11

Simulation results for networks with increasing \(N\) converge to the infinite-\(N\) analytic results. a, b Data for all simulations reaching fixed points from 25 simulations of 100 networks are averaged in the first column and separated out into states in which all units have low rates (quiescent, middle column) or one or more units are active (right column). Network size of the simulations is indicated by the color of the curve, with the root mean square of firing rates, \(\sqrt{\langle {f}^{2}\left(x\right)\rangle }\), taken across units in each network. These curves are compared with \(\sigma /g\) calculated for the infinite-\(N\) network (black curves). Units have logistic response functions with a \(\Delta =0.2\), \(s=0\), and b \(\Delta =0.2\), \(s=0.5\)

Fig. 12

Phase diagram for networks with tanh units. a Results with \(\Delta =1\) replicate those of Stern et al. (2014). b–f The region of multistability extends to lower \(s\), while remaining only at \(s\ge 1\) on the y-axis. (Black = chaos; cyan = quiescent only; orange = multiple active stable states.)

4.4 Multiple solutions for \(x\left(\eta \right)\)

The function \(x\left(\eta \right)\) can have more than one value satisfying \(x-sf\left(x\right)=\eta \) for some values of \(\eta \). This requires that \(x-sf\left(x\right)\) is a non-monotonic function, which occurs if \({\text{max}}\left[f^{\prime}\left(x\right)\right]>1/s\) (to produce a region of negative slope in the function \(x-sf\left(x\right)\)). The need for a region of negative slope arises because, in all cases considered here, \({f}^{\prime}\left(x\right)=0\) at large positive or negative values of \(x\), so \(x-sf\left(x\right)\) has a slope of \(+1\) there. In cases of multiple solutions for \(x\left(\eta \right)\), care must be taken in the choice of \(x\left(\eta \right)\): while stability is enhanced by choosing the solution with the lower value of \(f^{\prime}\left(x\left(\eta \right)\right)\), such a choice can also yield the lower value of \({f}^{2}\left(x\right)\) for some response functions (though not if \(f\left(x\right)={\text{tanh}}\left(x\right)\)), which can cause the self-consistent solution for the distribution of \(\eta \) to become too narrow to support multistability, as discussed below.

The logistic response function, \(f\left(x\right)=\frac{1}{1+{\text{exp}}\left(\frac{{x}_{th}-x}{\Delta }\right)}\approx {\text{exp}}\left(\frac{{x-x}_{th}}{\Delta }\right)\) for \({x\ll x}_{th}\), is never exactly zero. Therefore, the Gaussian distribution of \(\eta \) will always have nonzero variance for \(g>0\) and, even if the distribution is narrow with very small variance, it always retains some vanishingly small but nonzero density at the values of \(\eta \) required to support multiple solutions of \(x\left(\eta \right)\) if \(s>4\Delta \). However, if the bifurcation points in \(x\left(\eta \right)\) require levels of the Gaussian-distributed \(\eta \) that are many standard deviations from its mean of zero, such solutions have an exponentially small probability of producing multistability in a finite network, so they are unlikely to be observed in practice. Therefore, we set a threshold, \({Z}_{{\text{max}}}\sigma \), in terms of the number, \({Z}_{{\text{max}}}\), of standard deviations, \(\sigma \), of the distribution of inputs, \(\eta \), such that if both bifurcation points, \({\eta }^{*}\), are beyond the threshold (\({\eta }^{*}<-{Z}_{{\text{max}}}\sigma \) or \({\eta }^{*}>{Z}_{{\text{max}}}\sigma \)) we ignore both the extra solutions and any instability they cause. To clarify the result of such a limit, we show results with multiple values of \({Z}_{{\text{max}}}\) in Fig. 13 (for logistic response functions) and Fig. 14 (for tanh response functions), while using a default value of \({Z}_{{\text{max}}}=6\) in other figures. In this manner, we have used the results for an infinite system in which correlations are absent, but applied them to systems in which the number of units could range from \({10}^{3}\) to \({10}^{6}\) to \({10}^{15}\) (as \({Z}_{{\text{max}}}\) changes from 3 to 6 to 9), with the results remaining accurate for 999 networks in 1000 of that size. For further explanation, see also the text in Sect. 3.1.
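
To make the criterion concrete, the following MATLAB sketch locates the bifurcation points \({\eta }^{*}\) for a logistic response function and applies the \({Z}_{{\text{max}}}\) test; the parameter values, including the value used for \(\sigma \), are illustrative assumptions.

```matlab
% Minimal sketch: find the turning points of x - s*f(x) (where f'(x) = 1/s,
% possible only if s > 4*Delta for the logistic), map them to the bifurcation
% values eta*, and ignore the extra branches if both lie beyond Z_max*sigma.
s = 1.2; Delta = 0.2; Zmax = 6;
sigma = 0.3;                                  % placeholder std of the input distribution
xth = 0.5 + sqrt(0.25 - Delta) + Delta*log(Delta) ...
      - 2*Delta*log(0.5 + sqrt(0.25 - Delta));              % Eq. (13)

fTurn = 0.5 + [-1 1]*sqrt(0.25 - Delta/s);    % f(x) at the turning points, where f'(x) = 1/s
xTurn = xth + Delta*log(fTurn./(1 - fTurn));  % invert the logistic to get x at each turning point
etaStar = xTurn - s*fTurn;                    % bifurcation points of x(eta)
ignoreExtraBranches = all(abs(etaStar) > Zmax*sigma);       % Z_max criterion
```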

Fig. 13

Impact of the criterion for multistability in networks with logistic units. a–d Results with \(\Delta =0.2\) for varying threshold, \({Z}_{{\text{max}}}\), the number of standard deviations from the mean input that a unit must receive before it is considered able to switch state. The mathematical limit is shown in (d), while (a)–(c) indicate the multistable region growing with increased \({Z}_{{\text{max}}}\). Note that in all cases the system with \(g=0\) on the y-axis cannot be multistable for \(s<1\). e–h Results for \(\Delta =0.1\). (Black = chaos; dark blue = chaos + quiescent stable; cyan = quiescent only; yellow = quiescent + active stable state; orange = multiple active stable states; red = stable quiescent + multiple active stable states; crimson = chaos + multiple stable active states.)

Fig. 14

Impact of the criterion for multistability in networks with tanh units. a–d Results with \(\Delta =0.8\) for varying threshold, \({Z}_{{\text{max}}}\), the number of standard deviations from the mean input that a unit must receive before it is considered able to switch state. The mathematical limit is shown in (d), which in this case is minimally different from the results with lower \({Z}_{{\text{max}}}\) in (a–c). Note that in all cases the system with \(g=0\) on the y-axis cannot be multistable for \(s<1\). e–h Equivalent results for \(\Delta =0.4\), with a tiny but observable dependence on \({Z}_{{\text{max}}}\). (Black = chaos; cyan = quiescent only; orange = multiple active stable states.)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Breffle, J., Mokashe, S., Qiu, S. et al. Multistability in neural systems with random cross-connections. Biol Cybern 117, 485–506 (2023). https://doi.org/10.1007/s00422-023-00981-w

