Microstate

Thus, the microstate of the system can be toggled between a uniformly magnetized lattice and a lattice in which one set of rhomboids is reversed with respect to the partner set.

From: Solid State Physics, 2019

Principal Paradoxes of Classical Statistical Physics

Oleg Kupervasser, in Application of New Cybernetics in Physics, 2017

2.1.4 Reversibility and Poincaré's Theorem

Microstate evolution is reversible. For each trajectory in phase space, there is an inverse trajectory, obtained by reversing the velocities of all the molecules. This inversion is equivalent to the reverse playback of a movie of the process.

After some time (possibly a very long time), almost any trajectory returns arbitrarily close to its initial microstate. This statement is Poincaré's recurrence theorem [98]. Most real systems are chaotic and unstable: phase trajectories starting from neighboring microstates diverge rapidly. For such systems the return times therefore differ, even for initially neighboring microstates; the return time depends strongly on the exact position of the initial point within the mesh into which the phase space is divided. For a very small class of systems, termed integrable systems, however, the return time is approximately the same for all initial points in a phase mesh, and the returns occur periodically or almost periodically (see Appendix A.3).
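These contrasting behaviours can be sketched numerically (an illustration, not from the text): for a rigid rotation of the circle, an integrable system, the recurrence time to an ε-neighbourhood is the same for every initial point, while for a chaotic map (here the logistic map, a stand-in for an unstable system) the return times of neighbouring points scatter.

```python
import math

def circle_dist(a, b):
    """Distance between two points on the unit circle [0, 1)."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def return_time(step, x0, eps=1e-3, max_iter=2_000_000):
    """Iterate x -> step(x) and count steps until the orbit first
    re-enters the eps-neighbourhood of its starting point x0."""
    x = step(x0)
    for n in range(1, max_iter):
        if circle_dist(x, x0) < eps:
            return n
        x = step(x)
    return None

alpha = (math.sqrt(5) - 1) / 2            # irrational rotation number
rotation = lambda x: (x + alpha) % 1.0    # integrable: rigid rotation
logistic = lambda x: 4.0 * x * (1.0 - x)  # chaotic stand-in

rot_times = [return_time(rotation, x0) for x0 in (0.2, 0.2001, 0.2002)]
cha_times = [return_time(logistic, x0) for x0 in (0.2, 0.2001, 0.2002)]
print("rotation:", rot_times)   # identical for all starting points
print("chaotic: ", cha_times)   # scattered, point-dependent
```

For the rotation, the n-th iterate differs from its start by nα regardless of the starting point, so the return time is literally the same for every x0; for the chaotic map it depends sensitively on the initial point.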

URL: https://www.sciencedirect.com/science/article/pii/B9780128128015000024

The Geometric Variational Framework for Entropy in General Relativity

L. Fatibene, ... M. Raiteri, in Variational and Extremum Principles in Macroscopic Systems, 2005

2 The geometry of entropy in General Relativity

Motivated by microstate-counting methods, the Taub-bolt example, and statistical approaches, it came to be perceived that a geometrical definition of entropy in General Relativity might have lost its initial central role: the identification of entropy with horizon area was still considered important for (physically reasonable) stationary black holes, but in more general situations the geometric character of entropy was harder to recognize. Iyer and Wald [15] recently proposed a prescription that is intrinsically and deeply geometrical (though based on strong and unnecessary hypotheses that partially hide its geometrical content); nonstationary black holes, however, seemed to behave in a decidedly nongeometrical way. In the last few years we have generalized Wald's prescription, revealing its full geometrical nature, relaxing the unnecessary hypotheses, and showing that all the cases mentioned above can be easily handled in this way [16–19]. This shows that geometry is present even when it is not easily recognized, and it provides a unifying viewpoint.

The geometrization of entropy in General Relativity can be understood only after the problem has been split into two parts, one being deeply geometrical provided one gives up control of the other part. This is not a peculiarity of General Relativity. In any macroscopic approach to entropy, e.g. when dealing with classical thermodynamics of gases, one has to predict thermodynamical potentials (e.g. the temperature). It can easily be argued that no classical way of computing the temperature in a macroscopic thermodynamics exists unless all other thermodynamical potentials are known. Similarly, the best way to predict temperature in General Relativity is Hawking radiation, which is in turn based on a semiclassical approximation of quantum mechanics on a curved background or some equivalent classical result about geodesic geometry. The further identifications between the temperature and other geometrical entities (the surface gravity of the horizon, the period of time compactification in the Euclidean sector to avoid conical singularities, and so on) are necessarily less fundamental than Hawking radiation and basically represent a coincidence. It is worthwhile to mention that such a coincidence is deeply related to the no-hair theorem: if the state of a black hole is described by very few parameters that in turn determine its geometrical features, then it is trivial that any state function is necessarily a function of these few parameters (or, equivalently, it is a function of the geometrical parameters of the black hole). If one considers a one-parameter family of black-hole solutions (e.g. Schwarzschild solutions) such that the horizon area can be considered as a parameter for that family, then any physical quantity associated with those black holes can be trivially expressed as a function of the area of the horizon.

When the thermodynamical potentials are provided by some other physical equations, then one can write down the first law of thermodynamics that relates all the relevant quantities; the usual form of this fundamental principle is

(2) $\delta m = T\,\delta S + \Omega\,\delta J + b\,\delta q + \cdots ,$

where m is a measure of the energetic content of the gravitational system; T, Ω, b are the temperature, the angular velocity of the horizon, and an electromagnetic thermodynamical potential, respectively. The other quantities S, J and q are the entropy, the angular momentum, and the electric charge. Other terms can of course appear, depending on how liberal we decide to be in allowing other physical quantities (e.g. when more general gauge fields are present). The deformation operator δ denotes any infinitesimal variation of parameters in the space of solutions, e.g. along a (restricted) family of arbitrarily fixed solutions (as happens, e.g., along Kerr–Newman solutions).

The main content of the Wald prescription was to use the first law of thermodynamics to define S whenever all the other quantities are already known. We note that δm, δJ, δq, … can be defined via the Noether theorem (or other variational techniques) as boundary-asymptotic quantities. Hence the stronger the tools we have to compute conservation laws, the more general the situations we are able to deal with. In this view, variational calculus has a prominent role, being the natural setting for studying conservation laws in a global and covariant way and for guiding us towards their physical interpretation.
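As an elementary consistency check of the first law (2) (standard textbook material, not part of the authors' derivation), consider the Schwarzschild family, for which J = q = 0 and the only parameter is the mass m; in geometric units, with the entropy a quarter of the horizon area,

```latex
A = 16\pi m^{2}, \qquad S = \frac{A}{4} = 4\pi m^{2}, \qquad
T = \frac{\kappa}{2\pi} = \frac{1}{8\pi m},
```

so that

```latex
T\,\delta S \;=\; \frac{1}{8\pi m}\,\delta\!\left(4\pi m^{2}\right)
            \;=\; \frac{8\pi m\,\delta m}{8\pi m} \;=\; \delta m ,
```

in agreement with (2) restricted to this one-parameter family.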

URL: https://www.sciencedirect.com/science/article/pii/B9780080444888500096

Computer simulations

R.K. Pathria, Paul D. Beale, in Statistical Mechanics (Fourth Edition), 2022

16.1 Introduction and statistics

While certain critical aspects of computer simulation theory should be followed rigorously, much of computer simulation development and use is an art form. There are many possible simulation approaches for any given problem, and some choices will be more effective at elucidating important physical properties than others. This brief chapter concentrates on equilibrium simulations, but computer simulations are also widely used to model dynamical and nonequilibrium processes. The task of determining equilibrium thermodynamic averages of model systems is accomplished by generating a sequence of microstates that are chosen from the equilibrium ensemble of the model. For example, a molecular dynamics (MD) simulation might integrate Newton's equations of motion, generating a time series of states in phase space as the system explores the constant-energy hypersurface of the Hamiltonian. By comparison, a Monte Carlo (MC) simulation of the same model might generate a sequence of states chosen by a random walk among the configurational microstates of the canonical ensemble. Both methods are examples of importance sampling, which focuses computational effort on generating microstates that are representative of the equilibrium ensemble rather than sampling all of the phase space. It is this huge improvement in efficiency that makes computer simulations of statistical mechanical models feasible. The sequence of states produced by either method can be used to estimate equilibrium averages. Allen and Tildesley (2017), Binder and Heermann (2002), Frenkel and Smit (2002), and Landau and Binder (2009) provide more detailed discussions of computer simulations and their applications in statistical physics.

Let q represent a microstate of the system and A(q) a thermodynamic observable that is a function of the microstate. In an MC simulation, q might represent the positions of all the particles in the system, while in an MD simulation q might represent the positions and momenta of all the particles. The observable A(q) might represent the potential energy, the virial contribution to the pressure, the pair correlation function, and so on. The initial microstate chosen to start a simulation will generally not be typical of the set of microstates that make up the equilibrium ensemble, but the goal of a simulation is to evolve the microstate through a large enough subset of the microstates of the equilibrium ensemble that averages of observables approach their equilibrium values. After a simulation has run long enough for the system to approach equilibrium, the simulation then generates a sequence of M configurations, $\{q_j\}_{j=1}^{M}$, chosen from the set of microstates in the equilibrium ensemble, and stores a sequence of values, $\{A(q_j)\}_{j=1}^{M}$, for each of the thermodynamic variables one wants to measure. Since the microstates are chosen from the equilibrium ensemble, the equilibrium average of A is approximated by a simple average of the set of values $\{A(q_j)\}_{j=1}^{M}$. Of course, a simulation can only provide a finite sequence of states, so a statistical analysis of the uncertainty of the results is a crucial part of any simulation.
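A minimal Metropolis MC sketch (an illustration under assumed conditions, not the authors' code): a random walk through the configurational microstates q of a single degree of freedom in a hypothetical harmonic potential, visited with canonical weight; the simple average of A(q) = q² then approximates the canonical ⟨q²⟩ = 1/β expected from equipartition.

```python
import math, random

random.seed(1)

def metropolis_chain(beta, n_steps, step_size=1.0):
    """Random walk through microstates q, with moves accepted by the
    Metropolis rule so that states are visited with canonical weight
    exp(-beta * U(q)); here U(q) = q**2 / 2, a 1-d harmonic potential."""
    q, samples = 0.0, []
    for _ in range(n_steps):
        q_trial = q + random.uniform(-step_size, step_size)
        delta_u = (q_trial**2 - q**2) / 2.0
        if random.random() < math.exp(-beta * delta_u):
            q = q_trial                # accept the trial move
        samples.append(q)              # a rejected move repeats the old state
    return samples

beta = 1.0                             # inverse temperature
samples = metropolis_chain(beta, 200_000)
mean_q2 = sum(q * q for q in samples) / len(samples)
print(mean_q2)                         # equipartition predicts <q^2> = 1/beta
```

Note that rejected moves repeat the current state in the sample list; dropping them would bias the average.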

The equilibrium average of the variable A is given by

(1) $\langle A \rangle = A_M \pm \sigma_M ,$

where the simulation average $A_M$ and uncertainty $\sigma_M$ are determined by

(2a) $A_M = \dfrac{1}{M}\displaystyle\sum_{j=1}^{M} A(q_j) ,$

(2b) $\sigma_M = \sqrt{\dfrac{\langle A^2\rangle_M - A_M^2}{M/(2\tau+1)}} ,$

(2c) $\langle A^2\rangle_M - A_M^2 = \dfrac{1}{M}\displaystyle\sum_{j=1}^{M}\left[A(q_j) - A_M\right]^2 .$

The "correlation time" τ is defined as follows. Since the states $q_j$ are generated sequentially by the simulation, each new state $q_{j+1}$ is guaranteed to be close to the previous state $q_j$, so the values $A(q_j)$ in the sequence are highly correlated. The correlations in the values of $A(q_j)$ decrease with the "correlation time" τ, which can be calculated from the correlation function $\phi_{AA}(t)$, namely,

(3a) $\phi_{AA}(t) = \dfrac{\langle A(t)A(0)\rangle - \langle A(t)\rangle\langle A(0)\rangle}{\langle A^2\rangle - \langle A\rangle^2} ,$

(3b) $\tau = \displaystyle\sum_{t>0} \phi_{AA}(t) .$

The variable t is a measure of the separation between pairs of configurations in the ordered sequence. In the case of MD simulations, τ represents a physical time for the system to move far enough along its trajectory on the energy surface to result in decorrelated values of A. MC simulations explore equilibrium microstates in a random walk, so τ does not correspond to a physical time but rather to the average number of MC sweeps needed to give statistically independent values of A. The quantity $M/(2\tau+1)$ represents the number of statistically independent configurations in the sequence of M values.
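The error analysis of Eqs. (2a)-(3b) can be sketched as follows (a toy example: the simulation output is replaced by a synthetic correlated sequence with a known correlation time of about 10, so the estimate can be checked against the truth):

```python
import math, random

random.seed(7)

# Stand-in for simulation output: an AR(1) sequence whose autocorrelation
# is exp(-t/tau_true), so the estimated correlation time can be checked.
tau_true = 10.0
phi = math.exp(-1.0 / tau_true)
A = [0.0]
for _ in range(100_000):
    A.append(phi * A[-1] + random.gauss(0.0, math.sqrt(1.0 - phi * phi)))

M = len(A)
A_M = sum(A) / M                              # eq. (2a)
var = sum((a - A_M) ** 2 for a in A) / M      # eq. (2c)

# Correlation function and correlation time, eqs. (3a)-(3b); the sum is
# truncated once phi_AA(t) falls to the noise level.
tau = 0.0
for t in range(1, M // 100):
    c = sum((A[j] - A_M) * (A[j + t] - A_M) for j in range(M - t))
    c /= (M - t) * var
    if c < 0.05:
        break
    tau += c

sigma_M = math.sqrt(var / (M / (2.0 * tau + 1.0)))   # eq. (2b)
print(A_M, tau, sigma_M)
```

The estimated τ comes out close to 10, and $\sigma_M$ is correspondingly larger than the naive $\sqrt{\mathrm{var}/M}$ that would be obtained by ignoring the correlations.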

URL: https://www.sciencedirect.com/science/article/pii/B9780081026922000259

Mathematical Statistical Physics

Christian Maes, in Les Houches, 2006

5.2 Distributions

We are mostly uncertain about the exact micro-state of the system. That is so when preparing the system and also later when we observe it. Even when we know the reduced state, we still need to evaluate the plausibility of background configurations in order to predict the future development on the level of the reduced states. A natural choice here is to use the microcanonical ensemble. That is, we sample the reduced variables according to some probability distribution $\hat{\nu}$ on $\hat{\Gamma}$ and we impose the microcanonical distribution on each phase cell M. If $\hat{\nu}$ is a probability on $\hat{\Gamma}$, then $r(\hat{\nu})(x) \equiv \hat{\nu}(M(x))/|M(x)|$ is the probability density on Γ obtained from $\hat{\nu}$ by uniform randomization (microcanonical ensemble) inside each $M \in \hat{\Gamma}$. In words, the probability of a micro-state x is the probability (under $\hat{\nu}$) of its corresponding reduced state $M_x$ multiplied by the a priori probability (under the Liouville measure) of x given the reduced state $M_x$. So if we take $\hat{\nu} = \delta(M)$ concentrated on the reduced state $M \in \hat{\Gamma}$, then $r(\hat{\nu})$ is the initial probability density corresponding to an experiment where the system is started in equilibrium subject to constraints; that is, a uniform (i.e., microcanonical) distribution of the phase points over the set M.

For the opposite direction, we note that every density ν on Γ gives rise to its projection p(ν), a probability on $\hat{\Gamma}$, via

$p(\nu)(M) \equiv \nu(M) = \int \mathrm{d}x\, \nu(x)\, \delta(M(x) - M)$

and obviously, $p(r(\hat{\nu})) = \hat{\nu}$.
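A discrete toy example (with hypothetical numbers) makes the two maps and the identity $p(r(\hat{\nu})) = \hat{\nu}$ concrete:

```python
# Toy phase space Gamma = {0,...,5}; the reduced space has two cells.
cells = {"M1": [0, 1, 2], "M2": [3, 4, 5]}
cell_of = {x: name for name, xs in cells.items() for x in xs}

def randomize(nu_hat):
    """r(nu_hat): spread each cell's probability uniformly over the
    microstates of the cell (microcanonical distribution inside M)."""
    return {x: nu_hat[cell_of[x]] / len(cells[cell_of[x]]) for x in cell_of}

def project(nu):
    """p(nu): total probability that the density nu assigns to each cell."""
    return {name: sum(nu[x] for x in xs) for name, xs in cells.items()}

nu_hat = {"M1": 0.25, "M2": 0.75}        # hypothetical reduced distribution
nu = randomize(nu_hat)
proj = project(nu)
print(nu)      # each microstate of M1 carries 0.25/3, of M2 carries 0.75/3
print(proj)    # recovers nu_hat, i.e. p(r(nu_hat)) = nu_hat
```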

URL: https://www.sciencedirect.com/science/article/pii/S0924809906800508

COMPENDIUM OF THE FOUNDATIONS OF CLASSICAL STATISTICAL PHYSICS

Jos Uffink, in Philosophy of Physics, 2007

5.2 Units, zeros and the factor N!

The various expressions Gibbs proposed as analogies for entropy, i.e. (77), (81), and (84), were presented without any discussion of 'units and zeros', i.e. of their physical dimension and of the constants that may be added to these expressions. This was only natural, because Gibbs singled out those expressions for their formal merit of reproducing the fundamental equation, in which only the combination TdS appears. He discussed the question of the physical dimension of entropy by noting that the fundamental equation remains invariant if we multiply the analogue for temperature — i.e. the parameter θ in the canonical case, or the functions (80) or (83) for the microcanonical case — by some constant K and the corresponding analogues for entropy — (77), (81) and (84) — by 1/K. Applied to the simple case of the monatomic ideal gas of N molecules, he concluded that, in order to equate the analogues of temperature to the ideal gas temperature, 1/K should be set equal to

(85) $\dfrac{1}{K} = \dfrac{2}{3}\,\dfrac{c_V}{N} ,$

where cV is the specific heat at constant volume. He notes that "this value had been recognized by physicists as a constant independent of the kind of monatomic gas considered" [Gibbs, 1902, p. 185]. Indeed, in modern notation, 1/K = k, i.e. Boltzmann's constant.
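The identification 1/K = k is consistent with the familiar heat capacity of a monatomic ideal gas (an elementary check, not in Gibbs' text): since $c_V = \tfrac{3}{2}Nk$,

```latex
\frac{1}{K} \;=\; \frac{2}{3}\,\frac{c_V}{N}
          \;=\; \frac{2}{3}\cdot\frac{1}{N}\cdot\frac{3}{2}\,N k
          \;=\; k .
```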

Concerning the question of 'zeros', Gibbs noted that all the expressions proposed as analogies of entropy have the dimension of the logarithm of phase space volume and are thus affected by the choice of our units for length, mass, and time in the form of some additional constant (cf. [Gibbs, 1902, pp. 19, 183]). But even if some choice for such units is fixed, further constants could be added to the statistical analogues of entropy, i.e. arbitrary expressions that may depend on anything not varied in the fundamental equation. However, their values would disappear when differences of entropy are compared. And since only entropy differences have physical meaning, the question of determining these constants would thus appear to be immaterial. However, Gibbs went on to argue that "the principle that the entropy of any body has an arbitrary additive constant is subject to limitations when different quantities of the same substance are compared" [Gibbs, 1902, p. 206]. He formulated further conditions on how the additive constant may depend on the number N of particles in his final chapter.

Gibbs starts this investigation by raising the following problem. Consider the phase (i.e. microstate) $(q_1, p_1; \ldots; q_N, p_N)$ of an N-particle system where the particles are said to be "indistinguishable", "entirely similar" or "perfectly similar". Now, if we perform a permutation on the particles of such a system, should we regard the result as a different phase or not? Gibbs first argues that it "seems in accordance with the spirit of the statistical method" to regard such phases as the same. It might be urged, he says, that for such particles no identity is possible except that of qualities, and when comparing the permuted and unpermuted system, "nothing remains on which to base the identification of any particular particle of the first system with any particular particle of the second" [Gibbs, 1902, p. 187].

However, he immediately rejects this argument, stating that all this would be true for systems with "simultaneous objective existence", but hardly applies to the "creations of the imagination". On the contrary, Gibbs argues:

"The perfect similarity of several particles of a system will not in the least interfere with the identification of a particular particle in one case and with a particular particle in another. The question is one to be decided in accordance with the requirements of practical convenience in the discussion of the problems with which we are engaged" [Gibbs, 1902, p. 188]

He continues therefore by exploring both options, calling the viewpoint in which permuted phases are regarded as identical the generic phase, and that in which they are seen as distinct the specific phase. In modern terms, the generic phase space is the quotient space of the specific phase space, obtained by identifying all phase points that differ by a permutation (see [Leinaas and Myrheim, 1977]). In general, there are N! different permutations on the phase of a system of N particles, and there are thus N! different specific phases corresponding to one generic phase. This reduces the generic phase space measure by an overall factor of 1/N! in comparison to the specific phase space. Since the analogies to entropy all have a dimension equal to the logarithm of phase space measure, this factor shows up as a further additive constant to the entropy, namely −ln N!, in comparison to an entropy calculated from the specific phase. Gibbs concludes that when N is constant, "it is therefore immaterial whether we use [the generic entropy] or [the specific entropy], since this only affects the arbitrary constant of integration which is added to the entropy" [Gibbs, 1902, p. 206].

However, Gibbs points out that this is not the case if we compare the entropies of systems with different number of particles. For example, consider two identical gases, each with the same energy U, volume V and number of particles N, in contiguous containers, and let the entropy of each gas be written as S (U, V, N). Gibbs puts the entropy of the total system equal to the sum of the entropies:

(86) $S_{\mathrm{tot}} = 2 S(U, V, N) .$

Now suppose a valve is opened, making a connection between the two containers. Gibbs says that "we do not regard this as making any change in the entropy, although the gases diffuse into one another, and this process would increase the entropy if the gases were different" [Gibbs, 1902, p. 206-7]. Therefore, the entropy in this new situation is

(87) $S'_{\mathrm{tot}} = S_{\mathrm{tot}} .$

But the new system is a gas with energy 2U, volume 2V, and particle number 2N. Therefore, we obtain:

(88) $S'_{\mathrm{tot}} = S(2U, 2V, 2N) = 2 S(U, V, N) ,$

where the right-hand side equation expresses the extensivity of entropy. This condition is satisfied (at least for large N) by the generic entropy but not by the specific entropy. Gibbs concludes "it is evident therefore that it is equilibrium with respect to generic phases, and not that with respect to specific, with which we have to do in the evaluation of entropy, … except in the thermodynamics of bodies in which the number of molecules of the various kinds is constant" [Gibbs, 1902, p. 207].
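That extensivity (88) holds for the generic entropy but fails for the specific one can be checked numerically (a sketch assuming an ideal-gas form of the entropy; the constant c is arbitrary and drops out of the comparison):

```python
import math

def S_generic(U, V, N, c=1.0):
    """Ideal-gas entropy in units of k computed from the generic phase
    (the 1/N! factor included); c lumps the constants that play no role
    in the extensivity comparison."""
    return N * (math.log(V / N) + 1.5 * math.log(U / N) + c)

def S_specific(U, V, N, c=1.0):
    """The same entropy computed from the specific phase: larger by ln N!."""
    return S_generic(U, V, N, c) + math.lgamma(N + 1)

U, V, N = 1.0, 1.0, 10_000
gap_generic = S_generic(2 * U, 2 * V, 2 * N) - 2 * S_generic(U, V, N)
gap_specific = S_specific(2 * U, 2 * V, 2 * N) - 2 * S_specific(U, V, N)
print(gap_generic, gap_specific)   # ~0 versus roughly 2N ln 2
```

The specific entropy misses extensivity by $\ln(2N)! - 2\ln N! \approx 2N\ln 2$, which is exactly the kind of N-dependent additive constant at issue in Gibbs' argument.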

The issue expressed in these final pages is perhaps the most controversial in Gibbs' book; at least, it has generated much further discussion. Many later authors have argued that the insertion of a factor 1/N! in the phase space measure is obligatory to obtain "correct" results, and is ultimately due to a lack of any metaphysical identity or "haecceity" of the perfectly similar particles considered. Some have even gone on to argue that quantum mechanics is needed to explain this. For example, [Huang, 1987, p. 154] writes "It is not possible to understand classically why we must divide […] by N! to obtain the correct counting of states. The reason is inherently quantum mechanical …". However, many others deny this [Becker, 1967; van Kampen, 1984; Ray, 1984]. It would take me too far afield to discuss the various views and widespread confusion on this issue.

Let it suffice to note that Gibbs rejected arguments from the metaphysics of identity for the creations of the imagination. (I presume this may be taken to express that the phases of an N-particle system are theoretical constructs, rather than material objects.) Further, Gibbs did not claim that the generic view was correct and the specific view incorrect; he preferred to settle the question by "practical convenience". There are indeed several aspects of his argument that rely on assumptions that may be argued to be conventional. For example, the 'additivity' demand (86) could be expanded to read more fully:

(89) $S_{\mathrm{tot}}(U_1, V_1, N_1; U_2, V_2, N_2) + K_{\mathrm{tot}} = S_1(U_1, V_1, N_1) + K_1 + S_2(U_2, V_2, N_2) + K_2 ,$

applied to the special case where $S_1$ and $S_2$ are identical functions taken at the same values of their arguments. The point to note here is that this relation only leads to (86) if we also employ the conventions $K_{\mathrm{tot}} = K_1 + K_2$ and $K_1 = K_2$. Also, his cautious choice of words concerning (87) — "we do not regard this as making any change" — suggests that he wants to leave open whether this equation expresses a fact or a conventional choice on our part. But by and large, it seems fair to say that Gibbs' criterion of practical convenience is simply the recovery of the properties usually assumed to hold for thermodynamic entropy.

As a final remark, note that the contrast mentioned here in passing by Gibbs, i.e. that in thermodynamics the mixing of identical gases, by allowing them to diffuse into one another, does not change the entropy, whereas this process does increase the entropy if the gases are different, implicitly refers to an earlier discussion of this issue in his 1875 paper [Gibbs, 1906, pp. 165–167]. The contrast between the entropy of mixing of identical fluids and that of different fluids noted on that occasion is now commonly known as the Gibbs paradox. (More precisely, this 'paradox' is that the entropy of mixing different fluids is a constant (2Nk ln 2 in the above case) as long as the substances are different, and vanishes abruptly when they are perfectly similar, thus negating the intuitive expectation one might have had that the entropy of mixing should diminish gradually as the substances become more and more alike.) Now note that in the specific view, mixing different substances and mixing identical substances both lead to an entropy increase: in that view there is no Gibbs paradox, since there is no abrupt change when the substances become more and more alike. On the other hand, the adoption of the generic view, i.e. the division of the phase space measure by N!, is used by Gibbs to recover the usual properties of thermodynamic entropy, including the Gibbs paradox: the discontinuity between the mixing of different and of identical gases.

Still, many authors seem to believe that the division by N! is a procedure that solves the Gibbs paradox. But this is clearly not the case; instead, it is the specific viewpoint that avoids the paradox, while the generic viewpoint recovers the Gibbs paradox for the statistical mechanical analogies to entropy. The irony of it all is that, in statistical mechanics, the term "Gibbs paradox" is sometimes used to mean or imply the absence of the original Gibbs paradox in the specific point of view, so that a resolution of this "Gibbs paradox" requires the return of the original paradox.

URL: https://www.sciencedirect.com/science/article/pii/B9780444515605500129

An Introduction to Dynamics of Colloids

In Studies in Interface Science, 1996

Conditional pdf's

Consider again the photograph of the ensemble discussed earlier, which allows for the determination of the microstate of each of the systems in the ensemble. Now consider only those systems which at a certain earlier time $t_0 < t$ were in a particular microstate $X_0$. This subset of systems in the ensemble is itself an ensemble, and pdf's may be defined as above for this new ensemble. This new ensemble is an ensemble of systems that are prepared in microstate $X_0$ at time $t_0$. The pdf's for X are pdf's with the constraint that at the earlier time $t_0$ the system was in the microstate $X_0$. Such pdf's are called conditional pdf's, and are denoted as $P(X, t|X_0, t_0)$. Hence,

(1.41) $P(X, t|X_0, t_0)\,\mathrm{d}X$ = the probability that the positions and momenta are in $(X, X + \mathrm{d}X)$ at time t, given that their values were $X_0$ at time $t_0 < t$.

Similarly, conditional pdf's of phase functions f, given that the phase function had a particular value $f_0$ at an earlier time, may be defined as,

(1.42) $P(f, t|f_0, t_0)\,\mathrm{d}f$ = the probability that the phase function is in $(f, f + \mathrm{d}f)$ at time t, given that its value was $f_0$ at time $t_0 < t$.

By definition, the connection between conditional pdf's and the earlier discussed pdf's (sometimes referred to as unconditional pdf's) reads,

(1.43) $P(X, t|X_0, t_0) = \dfrac{P(X, t, X_0, t_0)}{P(X_0, t_0)} ,$

and similarly for pdf's of phase functions. The conditional ensemble average of a phase function f, given that $f = f_0$ at some earlier time $t_0$, is denoted as $\langle f \rangle_{f_0}$,

(1.44) $\langle f \rangle_{f_0} = \displaystyle\int \mathrm{d}f\; P(f, t|f_0, t_0)\, f .$

This ensemble average is in general a function of the time t. The phase function evolves in time differently for each system in the ensemble, since there are many different microstates $X_0$ that satisfy $f_0 = f(X_0)$. Two such different realizations are depicted in fig. 1.9. The conditional ensemble average is the average over all those possible realizations.

Figure 1.9. Two possible realizations of the time evolution of the phase function f, given that at time $t_0$ the phase function had the particular value $f_0$. The smooth curve is the conditional ensemble average $\langle f \rangle_{f_0}$.

One can of course define time-independent conditional pdf's. For example, one may ask for the probability that particles 3, 4, …, N have positions $r_3, r_4, \ldots, r_N$, given that particles 1 and 2 have fixed positions $r_1$ and $r_2$, respectively. That conditional pdf is, in analogy with eq. (1.43), equal to,

(1.45) $P(r_3, \ldots, r_N | r_1, r_2) = \dfrac{P(r_1, \ldots, r_N)}{P_2(r_1, r_2)} ,$

where $P_2(r_1, r_2)$ is the pdf for $(r_1, r_2)$, which will be discussed in more detail later.
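A discrete analogue of the definition (1.43), with hypothetical numbers, shows the mechanics of conditioning:

```python
# Discrete analogue of eq. (1.43): a joint pmf over (X, X0) and the
# conditional pmf obtained by dividing by the marginal of X0.
joint = {                          # hypothetical P(X, X0)
    ("a", "u"): 0.10, ("b", "u"): 0.30,
    ("a", "v"): 0.45, ("b", "v"): 0.15,
}
marginal_x0 = {}
for (x, x0), p in joint.items():
    marginal_x0[x0] = marginal_x0.get(x0, 0.0) + p

cond = {(x, x0): p / marginal_x0[x0] for (x, x0), p in joint.items()}
print(cond[("a", "u")])            # P(X=a | X0=u) = 0.10 / 0.40 = 0.25

# For each fixed X0 the conditional pmf sums to 1, as any pdf must.
for x0 in ("u", "v"):
    assert abs(sum(cond[(x, x0)] for x in ("a", "b")) - 1.0) < 1e-12
```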

To determine an ensemble average experimentally, there is no need to actually construct a collection of many macroscopically identical systems. When an experiment on a single system is repeated independently many times, the average of the outcomes of these experiments is the ensemble average. In many cases a single experiment is sufficient to obtain the ensemble average: when the system is so large that the quantity of interest has many independent realizations within different parts of the system, an ensemble average is measured in a single experiment that probes a large volume of the system.

URL: https://www.sciencedirect.com/science/article/pii/S1383730396800032

Prologue

Phil Attard, in Thermodynamics and Statistical Mechanics, 2002

1.3.1 States

A system possesses a fundamental set of states called microstates that are distinct and indivisible. Distinct means that each microstate bears a unique label, and indivisible means that no finer subdivision of the system is possible. These discrete states are ultimately quantum in nature, but one may pass to the classical continuum limit, in which case the archetypal microstate could be a position-momentum cell of fixed volume in phase space. Here the theory will initially be developed in a general and abstract way for the discrete case, as a precursor to the continuum results of classical statistical mechanics.

The macrostates of the system are disjoint, distinct sets of microstates. In general they correspond to the value of some physical observable, such as the energy or density of some part of the system, and they are labelled by this observable. Disjoint means that different macrostates have no microstates in common, and distinct means that no two macrostates have the same value of the observable. In addition to macrostates, there may exist states that are sets of microstates but which are not disjoint or are not distinct.

The microstate that the system is in varies over time due to either deterministic or stochastic transitions. In consequence, transitions also occur between the macrostates of the system. The set of all states that may be reached by a finite sequence of transitions defines the possible states of the system. Hence in time the system will follow a trajectory that eventually passes through all the possible states of the system. A transition rule for the microstates may be reversible (i.e., symmetric between the forward and the reverse transitions), but will yield statistically irreversible behaviour in the macrostates over finite times.
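A minimal sketch (an illustration, not the author's) of the last point: a transition rule that is symmetric on microstates, here flipping one randomly chosen spin, still drives the macrostate (the total magnetization) irreversibly toward the values realized by the most microstates.

```python
import random

random.seed(3)

# Microstates: arrays of N spins; macrostate: the total magnetization.
# The transition rule (flip one randomly chosen spin) is symmetric between
# forward and reverse moves, yet the macrostate relaxes from the extreme
# all-up value toward zero, where by far the most microstates live.
N = 100
state = [1] * N                      # start in the all-up microstate
trajectory = []
for _ in range(20_000):
    i = random.randrange(N)
    state[i] = -state[i]             # reversible microstate transition
    trajectory.append(sum(state))

late_avg = sum(trajectory[-1000:]) / 1000.0
print(trajectory[0], late_avg)       # starts at 98, settles near 0
```

Each individual move is exactly as probable as its reverse, so the irreversibility appears only at the macrostate level, through the counting of microstates per macrostate.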

URL: https://www.sciencedirect.com/science/article/pii/B9780120663217500015

Recent Advances in Topological Ferroics and their Dynamics

Joseph Sklenar, ... M. Benjamin Jungfleisch, in Solid State Physics, 2019

2.2.2 Vertex frustration

Vertex-frustrated ASI systems have also more recently been studied with PEEM-XMCD. By imaging the microstate of the Shakti lattice at temperatures just above the blocking temperature, low-energy thermal fluctuations within the lattice were carefully studied. In the square lattice, excitations above the ground state are often monopole pairs, which are two charged Type 3 vertices in a Type 1 background. Because the Shakti lattice has a mixed vertex coordination number, excitations cannot be well described with effective magnetic charges on individual vertices. Thus, a picture of quasiparticle excitations in vertex-frustrated ASI was not well established prior to PEEM-XMCD studies. Using PEEM-XMCD, Lao et al. showed that the Shakti lattice can be mapped onto an emergent dimer-cover model [154]. The dimer-cover model allows experimentally imaged spin maps to be converted into an emergent vector field, which can be used to define topologically protected charges corresponding to long-lifetime quasiparticles within the Shakti lattice. It was concluded that the topological charges, which represent excitations above the Shakti lattice's ground state, play an important role in limiting thermal equilibration of vertex-frustrated ASI systems, because the movement of the charges is kinetically constrained at low temperature. This is in contrast to the movement of charges in the conventional square ASI, where single spin flips lead to both charge-pair creation and propagation. In another PEEM-XMCD experiment, Stopfel et al. studied thermalization effects in a modified Shakti lattice, in which the vertices with two-island coordination are replaced by a single long island roughly twice the length of a single island, compared with the original Shakti lattice [155]. A central focus of this work was to study ground-state formation when there are two different blocking temperatures within the Shakti lattice. This is possible because the effective blocking temperature of a long magnetic island is significantly higher (by nearly 30 K) than that of the short islands.

While the Shakti lattice has been the most extensively studied vertex-frustrated ASI with PEEM-XMCD, other lattices have been considered as well. The Tetris lattice is one of the original vertex-frustrated lattices identified by Morrison et al. [43]. Similar to the Shakti lattice, the Tetris lattice comprises vertices with four-, three-, and two-island coordination numbers. Unlike the Shakti lattice, the Tetris lattice has two types of two-island vertices, where the islands are either in line with each other or arranged at a right angle to one another. When studying slow spin dynamics of the Tetris lattice, Gilbert et al. noted that the lattice can be decomposed into one set of diagonal stripes consisting of four-island and two-island vertices, and another set consisting of three-island and two-island vertices [156]. The stripes consisting of four-island and two-island vertices are able to thermalize into ordered ground states, while the stripes consisting of three-island and two-island vertices remain disordered and behave as one-dimensional Ising spin chains. Therefore, although the Tetris lattice is a two-dimensional tiling, the dimensionality of the system is effectively reduced along these one-dimensional Ising spin chains.


URL:

https://www.sciencedirect.com/science/article/pii/S0081194719300062

Extreme Energy Dissipation

Adam Moroz , in The Common Extremalities in Biology and Physics (Second Edition), 2012

1.1.12 Microparameters: Statistical Interpretation of Free Energy and Entropy

Statistical mechanics offers another possible interpretation of entropy, associated directly with the probability distribution over the microstates that realize a particular macrostate. This leads to the following definition of entropy:

(1.16) $S = -\sum_{i=1}^{N} p_i \ln p_i$,

where $p_i$ is the probability of finding the system in microstate $i$.

However, each microstate is characterized by a certain energy, and the probabilities $p_i$ are related to these energies in a definite way. This dependence was first obtained by Gibbs in 1901 and is called the Gibbs distribution (see, for example, Ref. [6]) or the canonical distribution:

(1.17) $p_i = C \exp(-E_i/T)$,

where C is a normalization constant, and

(1.18) $\dfrac{1}{C} = \sum_{i=1}^{N} \exp(-E_i/T)$.

It can be found that free energy is linked to the Gibbs distribution. Substituting the distribution into the formula for entropy, we then obtain

$S = -\ln C + \dfrac{1}{T} \sum_{i=1}^{N} E_i\, C \exp(-E_i/T) = -\ln C + \dfrac{\bar{E}}{T}$

or

(1.19) $\ln C = \dfrac{\bar{E} - TS}{T}$.

The average energy $\bar{E}$ can be defined as the internal energy $E$, so

(1.20) $E - TS \equiv F$,

where $F$ is the Helmholtz free energy (see Eq. (1.7)). Then,

(1.21) $F = T \ln C = -T \ln \sum_{i=1}^{N} \exp(-E_i/T)$,

where C is the normalization constant in the Gibbs distribution.

Therefore, free energy can be expressed as a measure related to the deviation of the actual energy distribution from the most natural one, in some sense the optimal distribution for the given macroconditions. There are a number of other definitions of entropy. The best known is the Tsallis entropy [7], which is a generalization of the Boltzmann–Gibbs entropy. However, all of these are based on accounting for the probabilities of microstates in a two-level thermodynamic model: the microstate and the macrostate. In reality, the hierarchy of biological systems is much more complex.
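The chain of Eqs. (1.16)–(1.21) can be checked numerically. The sketch below is ours, not from the text; it uses the text's units (k_B = 1, temperature in energy units) and an arbitrary, illustrative set of energy levels. It builds the Gibbs distribution and verifies that the statistical free energy $F = -T \ln \sum_i \exp(-E_i/T)$ agrees with the thermodynamic definition $F = \bar{E} - TS$:

```python
import math

def gibbs_distribution(energies, T):
    """Canonical probabilities p_i = C * exp(-E_i / T), Eq. (1.17)."""
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)              # Z = 1/C, Eq. (1.18)
    return [w / Z for w in weights], Z

def entropy(p):
    """Gibbs entropy S = -sum_i p_i ln p_i, Eq. (1.16), with k_B = 1."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def check_free_energy(energies, T):
    """Return F computed two ways: F = -T ln Z (Eq. 1.21) and F = E - T S (Eq. 1.20)."""
    p, Z = gibbs_distribution(energies, T)
    S = entropy(p)
    E_avg = sum(pi * Ei for pi, Ei in zip(p, energies))   # average energy E-bar
    return -T * math.log(Z), E_avg - T * S

# Hypothetical level set and temperature, chosen only for illustration:
F_stat, F_thermo = check_free_energy([0.0, 1.0, 2.5, 4.0], T=1.3)
```

Because $S = \bar{E}/T + \ln Z$ is an exact identity for the canonical distribution, the two values of $F$ agree to machine precision for any level set and temperature.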


URL:

https://www.sciencedirect.com/science/article/pii/B9780123851871000010

Phase transitions: exact (or almost exact) results for various models

R.K. Pathria , Paul D. Beale , in Statistical Mechanics (Fourth Edition), 2022

13.4.A The two-dimensional Ising model on a finite lattice

Phase transitions, viewed as critical phenomena, cannot occur in a finite system since a statistical mechanical model with a finite number of degrees of freedom cannot have a nonanalytic partition function or free energy. Criticality occurs only in the thermodynamic limit. Since real physical systems are of finite size, the manner in which finite-size effects manifest themselves as the correlation length ξ approaches the system size is of considerable importance in understanding how critical singularities get rounded off in real systems. In this regard, the two-dimensional nearest-neighbor Ising model on a square lattice in zero field can be solved on a finite square lattice with periodic boundary conditions (Kaufman, 1949), which allows for a detailed exploration of finite-size effects, especially near the bulk critical point; see Ferdinand and Fisher (1969). Kaufman's solution is based on a determination of all the eigenvalues of the transfer matrix.

Onsager (1944) only required the largest eigenvalue since his solution was based on a strip geometry with the length of one side taken to infinity. We here consider the Ising model on a lattice with $n$ rows and $m$ columns with periodic boundary conditions; see Figure 13.16. Each column of $n$ spins has $2^n$ possible configurations, so the transfer matrix $P$ that couples nearest-neighbor columns is a $2^n \times 2^n$ matrix of Boltzmann factors with eigenvalues $\lambda_\alpha$, with $\alpha = 1, 2, \ldots, 2^n$. Just as in the case of the one-dimensional Ising model studied in Section 13.2, the partition function of a system with $n$ rows and $m$ columns can be written as the trace of the transfer matrix $P$:


Figure 13.16. A finite square lattice with n = 4 rows and m = 6 columns. In view of the periodic boundary conditions, sites on the leftmost column interact with sites on the rightmost column and the bottom row interacts with the top row.

(46) $Q_{n,m}(K) = \operatorname{Trace}(P^m) = \sum_{\alpha=1}^{2^n} \lambda_\alpha^m$,

where the eigenvalues of the transfer matrix fall into two classes:

(47) $\lambda_\alpha = \begin{cases} (2\sinh(2K))^{n/2} \exp\!\big(\tfrac{1}{2}(\pm\gamma_0 \pm \gamma_2 \pm \cdots \pm \gamma_{2n-2})\big), \\ (2\sinh(2K))^{n/2} \exp\!\big(\tfrac{1}{2}(\pm\gamma_1 \pm \gamma_3 \pm \cdots \pm \gamma_{2n-1})\big). \end{cases}$

The quantity $\gamma_q$ for $0 < q < 2n$ is the positive root of the equation

(48) $\cosh(\gamma_q) = \dfrac{\cosh^2(2K)}{\sinh(2K)} - \cos\!\left(\dfrac{\pi q}{n}\right)$,

while the q = 0 case is given by

(49) $e^{\gamma_0} = e^{2K} \tanh(K)$.

Only terms with an even number of minus signs inside the exponentials appear in the sums in equation (47), so the partition function can be written as

(50) $Q_{n,m}(K) = \tfrac{1}{2}\,(2\sinh(2K))^{nm/2}\,(Y_1 + Y_2 + Y_3 + Y_4)$,

where

(51a) $Y_1 = \prod_{q=0}^{n-1} 2\cosh\!\left(\tfrac{m}{2}\gamma_{2q+1}\right)$,

(51b) $Y_2 = \prod_{q=0}^{n-1} 2\sinh\!\left(\tfrac{m}{2}\gamma_{2q+1}\right)$,

(51c) $Y_3 = \prod_{q=0}^{n-1} 2\cosh\!\left(\tfrac{m}{2}\gamma_{2q}\right)$,

(51d) $Y_4 = \prod_{q=0}^{n-1} 2\sinh\!\left(\tfrac{m}{2}\gamma_{2q}\right)$;

see Kaufman (1949). This form of the partition function allows for an exact calculation of the free energy, internal energy, and specific heat on finite lattices; see Figure 13.17. The logarithmic singularity in the specific heat at the bulk critical point evolves from a specific heat peak that grows logarithmically with the system size, that is, $C_{nm}(K_c)/nmk \simeq (8K_c^2/\pi)\ln(n) \simeq 0.4945\,\ln(n)$; see Ferdinand and Fisher (1969). Note also that the coefficient of $\ln(n)$ here is the same as the coefficient of the $\ln(|1 - T/T_c|)$ term in the bulk specific heat, as given in equation (37).
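Kaufman's closed form (50)–(51) can be verified directly on a small lattice. The sketch below is our own check, not from the text: it evaluates equations (48)–(51) and compares the result with a brute-force sum over all $2^{nm}$ spin configurations of a 4 × 4 periodic lattice (function names are ours). Note that $\gamma_0$ from (49) is negative below $K_c$, which the code preserves:

```python
import math

def gamma_q(q, n, K):
    """gamma_q: Eq. (49) for q = 0, positive root of Eq. (48) for 0 < q < 2n."""
    if q == 0:
        return 2 * K + math.log(math.tanh(K))   # e^{gamma_0} = e^{2K} tanh K
    c = math.cosh(2 * K) ** 2 / math.sinh(2 * K) - math.cos(math.pi * q / n)
    return math.acosh(c)

def kaufman_Q(n, m, K):
    """Exact partition function from Eqs. (50)-(51) (Kaufman, 1949)."""
    Y1 = math.prod(2 * math.cosh(0.5 * m * gamma_q(2 * q + 1, n, K)) for q in range(n))
    Y2 = math.prod(2 * math.sinh(0.5 * m * gamma_q(2 * q + 1, n, K)) for q in range(n))
    Y3 = math.prod(2 * math.cosh(0.5 * m * gamma_q(2 * q, n, K)) for q in range(n))
    Y4 = math.prod(2 * math.sinh(0.5 * m * gamma_q(2 * q, n, K)) for q in range(n))
    return 0.5 * (2 * math.sinh(2 * K)) ** (n * m / 2) * (Y1 + Y2 + Y3 + Y4)

def brute_force_Q(n, m, K):
    """Direct sum over all 2^(nm) configurations with periodic boundaries."""
    N = n * m
    Q = 0.0
    for bits in range(1 << N):
        s = [1 if (bits >> k) & 1 else -1 for k in range(N)]
        # B = sum of s_i s_j over the 2nm nearest-neighbor bonds
        B = sum(s[i * m + j] * (s[((i + 1) % n) * m + j] + s[i * m + (j + 1) % m])
                for i in range(n) for j in range(m))
        Q += math.exp(K * B)
    return Q
```

For example, `kaufman_Q(4, 4, 0.4)` and `brute_force_Q(4, 4, 0.4)` agree to near machine precision; the brute-force sum is only feasible for small lattices, which is exactly why the closed form matters.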


Figure 13.17. Specific heat of the two-dimensional Ising model for finite 2 × 2, 4 × 4, …, 64 × 64 lattices. The specific heat is analytic for all finite lattices. The maximum value of the specific heat grows proportional to the logarithm of the linear dimension of the lattice, and the location of the maximum approaches the bulk critical temperature (denoted by the vertical line) proportional to the inverse of the linear dimension of the lattice. From Ferdinand and Fisher (1969). Reprinted with permission.

Copyright © 1969 American Physical Society

The low-temperature series expansion for the partition function can be written as $Q_{n,m}(K) = e^{2nmK}\,\tilde{Q}_{n,m}(K)$, where

(52) $\tilde{Q}_{n,m}(K) = \sum_{q=0}^{nm} g_q\, x^{2q}$,

$x = e^{-2K}$ is the Boltzmann factor for a single excitation, and the coefficients $g_q$ denote the number of configurations with energy $4qJ$ above the ground state. The sum of the coefficients counts all the microstates in the system; therefore

$\lim_{K \to 0} \tilde{Q}_{n,m}(K) = \sum_{q=0}^{nm} g_q = 2^{nm}$.

The coefficients $g_q$ represent the number of microstates pertaining to energy $(-2nmJ + 4qJ)$, with the corresponding entropy being $k \ln g_q$.

The first term in the series is g 0 = 2 since there are two degenerate ground states, namely, all spins up or all spins down. It is straightforward to see that only even orders in x appear in this expansion. Examples of the low-order graphs that contribute to the series are shown in Figure 13.18. The first few terms in the series are

Figure 13.18

Figure 13.18. The lowest few excited states of the lattice. (a) The q = 2 states have a single down spin in a sea of up spins or a single up spin in a sea of down spins; these states have energy 8J above the ground state and there are g_2 = 2nm configurations. (b) The q = 3 states have a pair of adjacent down spins in a sea of up spins, or vice versa; these states have energy 12J above the ground state and g_3 = 4nm configurations. (c) and (d) The q = 4 states can have a single grouping of opposite spins or a pair of isolated flipped spins; these states have energy 16J above the ground state and the total number of configurations is g_4 = (nm)^2 + 9nm.

(53) $\tilde{Q}_{n,m}(K) = 2 + (2nm)x^4 + (4nm)x^6 + ((nm)^2 + 9nm)x^8 + (4(nm)^2 + 24nm)x^{10} + \cdots.$

If both $n$ and $m$ are even, the model's ferromagnetic/antiferromagnetic symmetry ($J \to -J$ and $s_i \to -s_i$ on one sublattice) gives $g_q = g_{nm-q}$. Due to the self-duality of the two-dimensional square lattice, exactly the same coefficients $g_q$ also appear in the high-temperature series expansion, where the expansion variable is $\tanh K$.

The probability P q of finding an equilibrium state with energy 4 q J above the ground state is given by

(54) $P_q = \dfrac{g_q\, x^{2q}}{\tilde{Q}_{n,m}(K)}$,

and the internal energy and the heat capacity per spin are given by

(55a) $\dfrac{U}{NJ} = -2 + \dfrac{4}{N} \sum_{q=0}^{N} q P_q \qquad (N = nm)$,

(55b) $\dfrac{C}{Nk} = \dfrac{16}{N} \left(\dfrac{J}{kT}\right)^2 \left(\sum_{q=0}^{N} q^2 P_q - \Big(\sum_{q=0}^{N} q P_q\Big)^2\right)$.
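Equations (54)–(55b) are easy to evaluate once the $g_q$ are known. The sketch below (ours; it obtains the $g_q$ by exact enumeration on a 4 × 4 periodic lattice rather than from the exact expansion) computes $U/NJ$ and $C/Nk$ from the energy distribution $P_q$, and the correct limiting behavior follows: $U/NJ \to -2$ at low temperature and $U/NJ \to 0$ at high temperature:

```python
import math
from collections import Counter

def energy_histogram(n, m):
    """g_q by exact enumeration on an n x m periodic lattice (E - E_0 = 4qJ)."""
    N = n * m
    g = Counter()
    for bits in range(1 << N):
        s = [1 if (bits >> k) & 1 else -1 for k in range(N)]
        B = sum(s[i * m + j] * (s[((i + 1) % n) * m + j] + s[i * m + (j + 1) % m])
                for i in range(n) for j in range(m))
        g[(2 * N - B) // 4] += 1
    return g

def thermodynamics(n, m, K):
    """U/(NJ) and C/(Nk) from the distribution P_q of Eq. (54), via Eqs. (55a,b)."""
    N = n * m
    g = energy_histogram(n, m)
    x = math.exp(-2 * K)                        # low-temperature variable
    Qt = sum(gq * x ** (2 * q) for q, gq in g.items())
    P = {q: gq * x ** (2 * q) / Qt for q, gq in g.items()}
    q1 = sum(q * Pq for q, Pq in P.items())
    q2 = sum(q * q * Pq for q, Pq in P.items())
    U = -2 + 4 * q1 / N                         # Eq. (55a)
    C = (16 / N) * K ** 2 * (q2 - q1 ** 2)      # Eq. (55b), with K = J/kT
    return U, C
```

Since $C/Nk$ is proportional to the variance of $q$, it is non-negative at every temperature, consistent with Figure 13.17.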

One can cast Kaufman's solution, equations (50) and (51), in the form of a low-temperature expansion of the form shown in (52), thereby giving an exact determination of the partition function and the equilibrium energy distribution; see Beale (1996). The low-temperature series (52) can be written as

(56) $\tilde{Q}_{n,m}(K) = \sum_{q=0}^{nm} g_q\, x^{2q} = Z_1 + Z_2 + Z_3 + Z_4$,

where if n is even, then

(57a) $Z_1 = \tfrac{1}{2} \prod_{q=0}^{n/2-1} c_{2q+1}^2$,

(57b) $Z_2 = \tfrac{1}{2} \prod_{q=0}^{n/2-1} s_{2q+1}^2$,

(57c) $Z_3 = \tfrac{1}{2}\, c_0\, c_n \prod_{q=1}^{n/2-1} c_{2q}^2$,

(57d) $Z_4 = \tfrac{1}{2}\, s_0\, s_n \prod_{q=1}^{n/2-1} s_{2q}^2$;

while if n is odd, then

(58a) $Z_1 = \tfrac{1}{2}\, c_n \prod_{q=0}^{(n-3)/2} c_{2q+1}^2$,

(58b) $Z_2 = \tfrac{1}{2}\, s_n \prod_{q=0}^{(n-3)/2} s_{2q+1}^2$,

(58c) $Z_3 = \tfrac{1}{2}\, c_0 \prod_{q=1}^{(n-1)/2} c_{2q}^2$,

(58d) $Z_4 = \tfrac{1}{2}\, s_0 \prod_{q=1}^{(n-1)/2} s_{2q}^2$.

The factors in equations (57) and (58) are

(59a) $c_0 = (1-x)^m + (x(1+x))^m$,

(59b) $s_0 = (1-x)^m - (x(1+x))^m$,

(59c) $c_n = (1+x)^m + (x(1-x))^m$,

(59d) $s_n = (1+x)^m - (x(1-x))^m$,

(59e) $c_q^2 = \dfrac{1}{2^{m-1}}\left(\left(\sum_{j=0}^{\lfloor m/2 \rfloor} \dfrac{m!\,(\alpha_q^2 - \beta^2)^j\, \alpha_q^{m-2j}}{(2j)!\,(m-2j)!}\right) + \beta^m\right)$,

(59f) $s_q^2 = \dfrac{1}{2^{m-1}}\left(\left(\sum_{j=0}^{\lfloor m/2 \rfloor} \dfrac{m!\,(\alpha_q^2 - \beta^2)^j\, \alpha_q^{m-2j}}{(2j)!\,(m-2j)!}\right) - \beta^m\right)$,

(59g) $\beta = 2x(1 - x^2)$,

(59h) $\alpha_q = (1 + x^2)^2 - \beta \cos\!\left(\dfrac{\pi q}{n}\right)$.

The function $\lfloor z \rfloor$ denotes the largest integer less than or equal to $z$. The quantities $c_q^2$ and $s_q^2$ were expanded using the binomial series in order to explicitly remove all square roots that would hide the polynomial nature of the final result. A symbolic programming language can be used to numerically expand the partition function as a polynomial in the variable $x$ in the form (52). One must set the numerical precision in the calculation to somewhat more than $nm \ln 2/\ln 10$ decimal digits in order to determine the exact values of the integer coefficients $\{g_q\}$. The numerical calculation can be checked against the low-order result (53) or with an exact enumeration of energies on small lattices.
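Before attempting the high-precision polynomial expansion, equations (57) and (59) can be sanity-checked in ordinary floating point. The sketch below is ours (for even $n$ only, and without the extended precision needed for exact coefficients): it evaluates $\tilde{Q}_{n,m}(K)$ from the $Z$-form and compares it with $e^{-2nmK}$ times a brute-force partition sum on a 4 × 4 periodic lattice:

```python
import math

def beale_Qtilde(n, m, K):
    """Q-tilde from Eqs. (57) and (59) (n even), evaluated in floating point."""
    assert n % 2 == 0
    x = math.exp(-2 * K)
    beta = 2 * x * (1 - x * x)                              # Eq. (59g)

    def cs(q):
        """Return (c_q^2, s_q^2) from Eqs. (59e,f)."""
        alpha = (1 + x * x) ** 2 - beta * math.cos(math.pi * q / n)   # Eq. (59h)
        total = sum(math.factorial(m) * (alpha ** 2 - beta ** 2) ** j
                    * alpha ** (m - 2 * j)
                    / (math.factorial(2 * j) * math.factorial(m - 2 * j))
                    for j in range(m // 2 + 1))
        return (total + beta ** m) / 2 ** (m - 1), (total - beta ** m) / 2 ** (m - 1)

    c0 = (1 - x) ** m + (x * (1 + x)) ** m                  # Eq. (59a)
    s0 = (1 - x) ** m - (x * (1 + x)) ** m                  # Eq. (59b)
    cn = (1 + x) ** m + (x * (1 - x)) ** m                  # Eq. (59c)
    sn = (1 + x) ** m - (x * (1 - x)) ** m                  # Eq. (59d)

    Z1 = 0.5 * math.prod(cs(2 * q + 1)[0] for q in range(n // 2))     # Eq. (57a)
    Z2 = 0.5 * math.prod(cs(2 * q + 1)[1] for q in range(n // 2))     # Eq. (57b)
    Z3 = 0.5 * c0 * cn * math.prod(cs(2 * q)[0] for q in range(1, n // 2))  # (57c)
    Z4 = 0.5 * s0 * sn * math.prod(cs(2 * q)[1] for q in range(1, n // 2))  # (57d)
    return Z1 + Z2 + Z3 + Z4

def brute_Q(n, m, K):
    """Direct partition sum over all 2^(nm) periodic configurations."""
    N = n * m
    Q = 0.0
    for bits in range(1 << N):
        s = [1 if (bits >> k) & 1 else -1 for k in range(N)]
        B = sum(s[i * m + j] * (s[((i + 1) % n) * m + j] + s[i * m + (j + 1) % m])
                for i in range(n) for j in range(m))
        Q += math.exp(K * B)
    return Q
```

Since $\tilde{Q}_{n,m}(K) = e^{-2nmK} Q_{n,m}(K)$, the two evaluations agree to floating-point accuracy; only the extraction of exact integer coefficients requires the extended precision described above.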

The low-temperature series for the Ising model on a 32 × 32 lattice is

(60) $\tilde{Q}_{32,32}(K) = 2 + 2048x^4 + 4096x^6 + 1057792x^8 + 4218880x^{10} + 371621888x^{12} + 2191790080x^{14} + 100903637504x^{16} + 768629792768x^{18} + 22748079183872x^{20} + \cdots + 4096x^{2042} + 2048x^{2044} + 2x^{2048}$,

where the largest coefficient is

(61) g 512 = 6 , 342 , 873 , 169 , 001 , 916 , 568 , 766 , 443 , 273 , 025 , 000 , 331 , 593 , 063 , 924 , 436 , 135 , 196 , 680 , 443 , 689 , 656 , 478 , 072 , 741 , 300 , 511 , 612 , 123 , 900 , 652 , 711 , 596 , 311 , 283 , 701 , 724 , 071 , 226 , 144 , 241 , 851 , 411 , 641 , 714 , 893 , 727 , 789 , 741 , 510 , 169 , 213 , 344 , 005 , 116 , 385 , 197 , 594 , 692 , 089 , 556 , 614 , 547 , 788 , 150 , 860 , 200 , 720 , 413 , 211 , 442 , 412 , 355 , 672 , 291 , 841 , 364 , 265 , 145 , 274 , 980 , 444 , 405 , 423 , 129 , 672 , 679 , 584 , 959 , 498 , 234 , 944 , 801 , 613 , 246 , 300 , 853 , 599 , 317 , 229 , 362 , 316 ,

that is, there are about $6.342 \times 10^{306}$ configurations with energy halfway between the ferromagnetic and antiferromagnetic ordered states. This single energy level accounts for about 3.5% of the $2^{1024}$ total configurations of the model. The exact results for the microcanonical entropy and the energy distribution for the 128 × 128 lattice are shown in Figures 13.19 and 13.20. These results provide excellent tests of Monte Carlo simulation methods, including broad histogram methods; see Beale (1996), Wang and Landau (2001), and Landau and Binder (2009).


Figure 13.19. Microcanonical entropy per spin $S/Nk = \ln(g_q)/nm$ for the two-dimensional Ising model on a 128 × 128 lattice as calculated from equations (56), (57), and (59). The slope of the curve is proportional to the inverse temperature, so the state with q/nm = 1/2 represents the infinite-temperature state with energy halfway between the ordered ferromagnetic and antiferromagnetic states; the largest coefficient $g_{8192} \simeq 1.049 \times 10^{4930}$ is the number of configurations with q = nm/2 = 8192. Likewise, the states at q = 0 and q = nm represent the ferromagnetic and antiferromagnetic ground states, so the slopes of the curve diverge logarithmically in the thermodynamic limit.


Figure 13.20. The exact energy distribution $P_q$ for the two-dimensional Ising model on a 128 × 128 lattice for K = 0.4, K = K_c ≃ 0.4407, and K = 0.5. The variance of the energy distribution is proportional to the specific heat, so it is largest near K = K_c. Refer to Figure 13.17.


URL:

https://www.sciencedirect.com/science/article/pii/B9780081026922000223