Right about time?

Sean Gryb∗1,2 and Flavio Mercati†3

1 Institute for Theoretical Physics, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands
2 Institute for Mathematics, Astrophysics and Particle Physics, Radboud University, Huygens Building, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
3 Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada

∗ [email protected]
† [email protected]

arXiv:1301.1538v2 [gr-qc] 10 Jan 2013

Abstract

Have our fundamental theories got time right? Does size really matter? Or is physics all in the eyes of the beholder? In this essay, we question the origin of time and scale by reevaluating the nature of measurement. We then argue for a radical scenario, supported by a suggestive calculation, where the flow of time is inseparable from the measurement process. Our scenario breaks the bond of time and space and builds a new one: the marriage of time and scale.

1 Introduction

Near the end of the 19th century, physics appeared to be slowing down. The mechanics of Newton and others rested on solid ground, statistical mechanics explained the link between the microscopic and the macroscopic, Maxwell's equations unified electricity, magnetism, and light, and the steam engine had transformed society. But the blade of progress is double edged and, as more problems were sliced through, fewer legitimate fundamental issues remained. Physics, it seemed, was nearing an end.

Or was it? Among the few remaining unsolved issues were two experimental anomalies. As Lord Kelvin allegedly announced: "The beauty and clearness of the dynamical theory [...] is at present obscured by two clouds." [1] One of these clouds was the ultra–violet catastrophe: an embarrassing prediction that hot objects like the sun should emit infinite energy. The other anomaly was an experiment by Michelson and Morley that measured the speed of light to be independent of how an observer was moving. Given the tremendous success of physics at that time, it would have been a safe bet that, soon, even these clouds would pass.

Never bet on a sure thing. The ultra–violet catastrophe led to the development of quantum mechanics, and the Michelson–Morley experiment led to the development of relativity. These discoveries completely overturned our understanding of space, time, measurement, and the perception of reality. Physics was not over; it was just getting started.

Fast–forward a hundred years or so. Quantum mechanics and relativity rest on solid ground. The microchip and GPS have transformed society. These frameworks have led to an understanding that spans from the microscopic constituents of the nucleus to the large scale structure of the Universe. The corresponding models have become so widely accepted and successful that they have been dubbed the standard models of particle physics and cosmology. As a result, the number of truly interesting questions appears to be slowly disappearing. In well over 30 years, there have been no experimental results in particle physics that can't be explained within the basic framework laid out by the standard model. With the ever increasing cost of particle physics experiments, it seems that the data is drying up. But without input from experiment, how can physics proceed? It would appear that physics is, again, in danger of slowing down.

Or is it? Although the number of interesting fundamental questions appears to be decreasing, the importance of the remaining questions is growing. Consider two of the more disturbing experimental anomalies.
The first is the naturalness problem, i.e., the presence of unnaturally large and small numbers in Nature. The most embarrassing of these numbers – and arguably the worst prediction of science – is the accelerated expansion of the Universe, which is some 120 orders of magnitude smaller than its natural value. The second is the dark matter problem: some 85–90 percent of the matter content of our Universe is of an exotic nature that we have not yet seen in the lab. It would seem that we actually understand very little of what is happening in our Universe!

The problem is not that we don't have enough data. The problem is that the data we do have does not seem to be amenable to explanation through incremental theoretical progress. The belief that physics is slowing down or, worse, that we are close to a final theory is just as unimaginative now as it would have been before 1900. The lesson from that period is that the way forward is to question the fundamental assumptions of our physical theories in a radical way. This is easier said than done: one must not throw out the baby with the bath water. What is needed is a careful examination of our physical principles in the context of real experimental facts, to explain more data using fewer assumptions.

The purpose of this work is to point out three specific assumptions made by our physical theories that might be wrong. We will not offer a definite solution to these problems but suggest a new scenario, supported by a suggestive calculation, that puts these assumptions into a new light and unifies them. The three assumptions we will question are

1. Time and space are unified.
2. Scale is physical.
3. Physical laws are independent of the measurement process.

We will argue that these three assumptions inadvertently violate the same principle: the requirement that the laws of physics depend only on what is knowable through direct measurement. They fall into a unique category of assumptions that are challenged when we ask how to adapt the scientific method, developed for understanding processes in the lab, to the cosmological setting. In other words, how can we do science on the Universe as a whole? We will not directly answer this question but, rather, suggest that this difficult issue may require a radical answer that questions the very origin of time. The flow of time, we will argue, may be fundamentally linked to the process of measurement. We will then support this argument with an intriguing calculation that recovers the black hole entropy law from a simple toy model. Before getting to this, let us explain the three questionable assumptions.

2 Three questionable assumptions

Many of our most basic physical assumptions are made in the first week of physics education. A good example is one of the first equations we are taught: the definition of velocity,

v = \frac{\Delta x}{\Delta t} . \qquad (1)

Giving this equation a precise operational meaning has been an outstanding issue in physics for its entire history. This is because, to understand this equation, one has to have an operational definition of x, t, and \Delta. Great minds have pondered this question and their insights have led to scientific revolutions, including the development of Newtonian mechanics, relativity, and quantum mechanics.¹ Recently, the meaning of x and, in particular, t has been the subject of a new debate whose origin is in a theory of quantum gravity. This brings us to our first questionable assumption.

¹ A lot to digest in the first week!

2.1 Time and space are unified

The theory of relativity changed our perception of time.
As Minkowski put it in 1908 [2], "space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality." Nowhere is this more apparent than in the main equation physicists use to construct the solutions of general relativity (GR):

S_{\text{Einstein-Hilbert}} = \int d^4x \, (R + L_{\text{matter}}) \sqrt{-g} . \qquad (2)

Can you spot the t? It's hidden in the 4 of d^4x. But there are important structures hidden by this compact notation. We will start by pointing out an invisible minus sign in equation (2). When calculating spacetime distances, one needs to use

x^2 + y^2 + z^2 - t^2 , \qquad (3)

which has a − in front of the t² instead of Pythagoras' +. The minus sign looks innocent but has important consequences for the solutions of equation (2). Importantly, the minus sign implies causality, which means that only events in the past can affect what is going on now. This, in turn, implies that generic solutions of GR can only be obtained by specifying information at a particular time and then seeing how this information propagates into the future. Doing the converse, i.e., specifying information at a particular place and seeing how that information propagates to another place, is, in general, not consistent.² Thus, the minus sign already tells you that you have to use the theory in a way that treats time and space differently.

² Technically, the difference is in the elliptic versus hyperbolic nature of the evolution equations.

There are other ways to see how time and space are treated differently in gravity. In Julian Barbour's 2009 essay, The Nature of Time [3], he points out that Newton's "absolute" time is not "absolute" at all. Indeed, the Newtonian notion of duration – that is, how much time has ticked by – can be inferred from the total change in the spatial separations of particles in the Universe. He derives the equation

\Delta t^2 \propto \sum_i \Delta d_i^2 , \qquad (4)

where the d_i are inter–particle separations in units where the masses of the particles are 1. The factor of proportionality is important, but not for our argument. What is important is that changes in time can be inferred from changes in distances, so that absolute duration is not an input of the classical theory. This equation can be generalized to gravity, where it must be solved at every point in space. The implications for the quantum theory are severe: time completely drops out of the formalism. Expert readers will recognize this as one of the facets of the Problem of Time [4]. The fact that there is no equivalent Problem of Space can be easily traced back to the points just made: time is singled out in gravity as the variable in terms of which the evolution equations are solved. This in turn implies that local duration should be treated as an inferred quantity rather than something fundamental.
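To make equation (4) concrete, here is a minimal numerical sketch (ours, not the essay's) that infers a duration from two snapshots of a toy universe; the proportionality constant, which carries the physics, is simply set to 1.

```python
import numpy as np

# Two successive snapshots of a 3-particle toy universe (1D positions,
# unit masses). The positions are illustrative values, not from the essay.
before = np.array([0.0, 1.0, 3.0])
after = np.array([0.1, 0.9, 3.2])

def separations(x):
    """All inter-particle separations d_ij of a 1D configuration."""
    i, j = np.triu_indices(len(x), k=1)
    return np.abs(x[i] - x[j])

delta_d = separations(after) - separations(before)

# Equation (4): (delta t)^2 is proportional to the sum of (delta d_i)^2.
# With the constant set to 1, the result is in arbitrary units: the
# duration is inferred from change, not assumed as an external input.
delta_t = np.sqrt(np.sum(delta_d**2))
print(f"inferred duration: {delta_t:.4f}")
```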
Clearly, time and space are not treated on the same footing in the formalism of GR despite the rather misleading form of equation (2). Nevertheless, it is still true that the spacetime framework is incredibly useful and, as far as we know, correct. How can one reconcile this fact with the space–time asymmetry in the formalism itself? We will investigate this in section (3.2).

2.2 Scale is physical

Before even learning the definition of velocity, the novice physicist is typically introduced to an even more primary concept that usually makes up one's first physics lesson: units. Despite the rudimentary nature of units, they are probably the most commonly misunderstood concept in all of physics. If you ask ten different physicists for the physical meaning of a unit, you will likely get ten different answers. To avoid confusion, most theoreticians set all dimensionful constants equal to 1. However, one can't predict anything until one has painfully reinserted these dimensionful quantities into the final result. And yet, no one has ever directly observed a dimensionful quantity. This is because all measurements are comparisons. A meter has no intrinsic operational meaning, only the ratio of two lengths does. One can call object A a meter and measure that object B is twice its length. Then, object B has a length of 2 meters, but that tells you nothing about the intrinsic length of object A. If a demon doubled the intrinsic size of the Universe, the result of the experiment would be exactly the same.
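A trivial sketch of the point (ours, with made-up objects and values): a measurement only ever records a ratio, so the demon's global rescaling drops out of every possible outcome.

```python
# Measurements are comparisons: only ratios of lengths are recorded.
lengths = {"rod_A": 1.0, "rod_B": 2.0, "table": 1.5}  # arbitrary units

def measure(world, obj, unit="rod_A"):
    # The outcome of any length measurement: a dimensionless ratio.
    return world[obj] / world[unit]

# The demon doubles the intrinsic size of everything in the Universe.
doubled = {name: 2.0 * size for name, size in lengths.items()}

# The rescaling is invisible to every measurement.
assert measure(lengths, "rod_B") == measure(doubled, "rod_B")
print(measure(lengths, "rod_B"), measure(doubled, "rod_B"))  # 2.0 2.0
```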
So, where do units come from? Some units, like the unit of pressure, are the result of emergent physics. We understand how they are related to more "fundamental" units like meters and seconds. However, even our most fundamental theories of Nature have dimensionful quantities in them. The standard model of particle physics and classical GR require only a single unit: mass. Scale or, more technically, conformal invariance is then broken by the Higgs mass, which is related to all the masses of the particles in the standard model, and the Planck mass, which sets the scale of quantum gravity. As already discussed, there is a naturalness problem associated with writing all other constants of nature as dimensionless quantities.

The presence of dimensionful quantities is an indication that our "fundamental" theories are not fundamental at all. Instead, scale independence should be a basic principle of a fundamental theory. As we will see in section (3.2), there is a formulation of gravity that is nearly scale invariant. The "nearly" will be addressed by the considerations of the next section.

2.3 Physical laws are independent of the measurement process

There is one assumption that is so fundamental it doesn't even enter the physics curriculum: the general applicability of the scientific method. We know that the scientific method can be applied in the laboratory, where external agents (i.e., scientists) carefully control the inputs of some subsystem of the Universe and observe the subsystem's response to these inputs. We don't know, however, whether it is possible to apply these techniques to the Universe as a whole. On the other hand, when it comes to quantum mechanics, we do know whether our formalism can be consistently applied to the Universe. The answer is 'NO'. The reasons are well understood – if not disappointingly underappreciated – and the problem even has a name: the measurement problem.

The measurement problem results from the fact that quantum mechanics is a framework more like statistical physics than classical mechanics. In statistical physics, one has practical limitations on one's knowledge of a system, so one takes an educated guess at the results of a specific experiment by calculating a probability distribution for the outcome using one's current knowledge of the system. In quantum mechanics, one has fundamental limitations on one's knowledge of the system – essentially because of the uncertainty principle – so one can only make an educated guess at the outcome of a specific experiment by calculating a probability distribution for the outcome using one's current knowledge of the system. However, it would be strange to apply statistical mechanics to the whole Universe³ because the Universe itself is only given once. It is difficult to imagine an ensemble of Universes for which one can calculate a probability distribution. The same is true in quantum mechanics, but the problem is worse. The framework itself is designed to give you a probability distribution for the outcome of some measurement, but how does one even define a measurement when the observer itself is taken to be part of the system? The answer is not found in any interpretation of quantum mechanics, although the problem itself takes a different form in a given interpretation. The truth is that quantum mechanics requires some additional structure, which can be thought of as the observer, in order for it to make sense. In other words, quantum mechanics can never be a theory of the whole Universe.

³ Believers in the Multiverse could substitute "Universe" for "Multiverse" in this argument.

As a consequence of this, any approach to quantum gravity that uses quantum mechanics unmodified – including all major approaches to quantum gravity – is not, and can never be, a theory of the whole Universe. It could still be used for describing quantum gravity effects on isolated subsystems of the Universe, but that is not the ambition of a full fledged quantum gravity theory. Given such a glaring foundational issue at the core of every major approach to quantum gravity, we believe that the attitude that we are nearing the end of physics is unjustified. The "shut–up and calculate" era is over. It is time for the quantum gravity community to return to these fundamental issues.

One approach is to change the ambitions of science. This is the safest and most practical option, but it would mean that science is inherently a restricted framework. The other possibility is to try to address the measurement problem directly. In the next section, we will give a radical proposal that embraces the role of the observer in our fundamental description of Nature. To understand how this comes about, we need one last ingredient: renormalization, or the art of averaging.

3 A way forward

3.1 The art of averaging

It is somewhat unfortunate that the great discoveries of the first half of the 20th century have overshadowed those of the second half. One of these, the theory of renormalization, is potentially the uncelebrated triumph of 20th century physics. Renormalization was born as a rather ugly set of rules for removing some undesirable features of quantum field theories. From these humble beginnings, it has grown into one of the gems of physics. In its modern form due to Wilson [5], renormalization has become a powerful tool for understanding what happens in a general system when one lacks information about the details of its fine behavior. Renormalization's reach extends far beyond particle physics and explains, among other things, what happens during phase transitions. But the theory of renormalization does even more: it helps us understand why physics is possible at all.

Imagine what it would be like if, to calculate everyday physics like the trajectory of Newton's apple, one had to compute the motions of every quark, gluon, and electron in the apple and use quantum gravity to determine the trajectory. This would be completely impractical. Fortunately, one doesn't have to resort to this. High–school physics is sufficient to determine the motion of what is, fundamentally, an incredibly complicated system. This is possible because one can average, or coarse grain, over the detailed behavior of the microscopic components of the apple. Remarkably, the average motion is simple.
This fact is the reason why Newtonian mechanics is expressible in terms of simple differential equations and why the standard model is made up of only a couple of interactions. In short, it is why physics is possible at all. The theory of renormalization provides a framework for understanding this.

The main idea behind renormalization is to be able to predict how the laws of physics will change when a coarse graining is performed. This is similar to what happens when one changes the magnification of a telescope. With a large magnification, one might be able to see the moons of Jupiter and some details of the structure of their atmospheres. But, if the magnification, or the renormalization scale, is steadily decreased, the resolution is no longer good enough to make out individual moons and the lens averages over these structures. The whole of Jupiter and its moons becomes a single dot. As we vary the renormalization scale, the laws of physics that govern the structures of the system change from the hydrodynamic laws of the atmospheres to Newton's law of gravity. The theory of renormalization produces precise equations that say how the laws of physics will change, or flow, as we change the renormalization scale. In what follows, we will propose that flow under changes of scale may be related to the flow of time.
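A cartoon of coarse graining (our sketch, not the essay's): repeatedly block-average a fine-grained signal and watch the microscopic detail wash out while the large-scale structure survives. Real renormalization tracks how the couplings of a theory flow under this kind of averaging.

```python
import numpy as np

# Coarse graining as repeated block averaging: each step halves the
# resolution, averaging over detail below the new "renormalization scale".
rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 1024)
signal = np.sin(x) + 0.5 * rng.standard_normal(x.size)  # smooth law + noise

def block_average(f):
    """Average neighbouring pairs: one step of coarse graining."""
    return f.reshape(-1, 2).mean(axis=1)

f, grid = signal, x
for step in range(1, 7):
    f, grid = block_average(f), block_average(grid)
    noise = np.std(f - np.sin(grid))  # residual microscopic detail
    print(f"step {step}: {f.size:4d} points, residual noise ~ {noise:.3f}")
# The noise shrinks by roughly 1/sqrt(2) per step while the sine wave
# survives: simple effective laws hold for the averaged quantities.
```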
3.2 Time from coarse graining

We are now prepared to discuss an idea that puts our three questionable assumptions into a new light by highlighting a way in which they are connected. First, we point out that there is a way to trade a spacetime symmetry for conformal symmetry without altering the physical structures of GR. This approach, called Shape Dynamics (SD), was initially advocated by Barbour [6] and was developed in [7, 8]. Symmetry trading is allowed because symmetries don't affect the physical content of a theory. In SD, the irrelevance of duration in GR is traded for local scale invariance (we will come to the word "local" in a moment). This can be done without altering the physical predictions of the theory, but at the cost of having to treat time and space on a different footing. In fact, the local scale invariance is only an invariance of space, so that local rods – not clocks – can be rescaled arbitrarily. Time, on the other hand, is treated differently. It is a global notion that depends on the total change in the Universe. In 2 spatial dimensions, we know that this trading is possible because of an accidental mathematical relationship between the structure of conformal symmetry in 2 dimensions and the symmetries of 3 dimensional spacetime [9].⁴ We are investigating whether this result will remain true in 3 spatial dimensions. If it does, it would mean that the spacetime picture and the conformal picture can coexist because of a mere mathematical accident.

⁴ Technically, this is the isomorphism between the conformal group in d spatial dimensions and the de Sitter group in d+1 dimensions.

We now come to a key point: in order for any time evolution to survive in SD, one cannot eliminate all of the scale. The global scale of the Universe cannot be traded since, then, no time would flow. Only a redistribution of scale from point to point is allowed (this is the significance of the word "local"); the overall size of the Universe cannot be traded. In other words, global scale must remain for change to be possible.

How can we understand this global scale? Consider a world with no scale and no time. In this world, only 3 dimensional Platonic shapes exist. This kind of world has a technical name: it is a fixed point of renormalization – "fixed" because such a world does not flow, since the renormalization scale is meaningless. This cannot yet be our world because nothing happens in this world. Now, allow for something to happen and call this "something" a measurement. One thing we know about measurements is that they can never be perfect. We can only compare the smallest objects of our device to larger objects and coarse grain the rest. Try as we may, we can never fully resolve the Platonic shapes of the fixed point. Thus, coarse graining by real measurements produces flow away from the fixed point. But, what about time? How can a measurement happen if no time has gone by? The scenario that we are suggesting is that the flow under the renormalization scale is exchangeable with the flow of time. Using the trading procedure of SD, the flow of time might be relatable to renormalization away from a theory of pure shape.

In this picture, time and measurement are inseparable. Like a diamond with many faces, scale and time are different reflections of a single entity. This scenario requires a radical reevaluation of our notions of time, scale, and measurement. To be sure, a lot of thought is still needed to turn this into a coherent picture.

A couple of comments are in order. Firstly, some authors [10, 11] have investigated a similar scenario, called holographic cosmology, using something called gauge/gravity duality. However, our approach suggests that one may not have to assume gauge/gravity duality for this scenario but, instead, can make use of symmetry trading in SD. Furthermore, our motivation and our method of implementation are more concrete. Secondly, why should we expect that there is enough structure in a coarse graining of pure shapes to recover the rich structure of spacetime? A simple answer is the subject of the next section.⁵

⁵ FM and M. Lostaglio are exploring a related approach [12].

4 The size that matters

In this section, we perform a simple calculation suggesting that the coarse graining of shapes described in the last section could lead to gravity. This section is more technical than the others, but this is necessary to set up our final result. Brave souls can find the details of the calculations in the Technical Appendix (A).

We will consider a simple "toy model" that, remarkably, recovers a key feature of gravity. Our model will be a set of N free Newtonian point particles. To describe the calculation we will need to talk about two spaces: Shape Space and Extended Configuration Space (ECS). Shape Space is the space of all the shapes of the system. If N = 3, this is the space of all triangles. ECS is the space of all Cartesian coordinates of the particles. That is, the space of all ways you can put a shape into a Cartesian coordinate system. The ECS is larger than Shape Space because it has information about the position, orientation, and size of the shapes. Although this information is unphysical, it is convenient to work with it anyway because the math is simpler. This is called a gauge theory. We can work with gauge theories provided we remove, or quotient out, the unphysical information. To understand how this is done, examine Figure (1), which shows schematically the relation between the ECS and Shape Space.

Figure 1: Each point in Shape Space is a different shape (represented by triangles). These correspond to an equivalence class (represented by arrows) of points of the Extended Configuration Space describing the same shape with a different position, orientation, and size.

Each point on Shape Space is a different shape of the system, like a triangle. All the points along the arrows represent the same shape with a different position, orientation, or size. By picking a representative point along each arrow, we get a 1–to–1 correspondence between ECS and Shape Space. This is called picking a gauge.
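A minimal sketch of picking a gauge (our illustration, with unit masses and particles on a line): map any ECS point to the representative of its equivalence class with center of mass zero and unit moment of inertia.

```python
import numpy as np

def to_shape(positions):
    """Gauge-fixed representative of a 1D configuration's shape:
    center of mass at the origin, moment of inertia set to 1."""
    q = positions - positions.mean()      # quotient out translations
    return q / np.sqrt((q**2).mean())     # quotient out size

a = np.array([0.0, 1.0, 3.0])
b = 5.0 * a + 7.0   # same shape: a translated, rescaled copy of a

# Both ECS points map to the same Shape Space representative.
print(to_shape(a))
print(to_shape(b))
```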
Mathematically, this is done by imposing constraints on the ECS. In our case, we need to specify a constraint that will select a triangle with a certain center of mass, orientation, and size. For technical reasons, we will assume that all particles are confined to a line so that we don't have to worry about orientation. To specify the size of the system, we can take the "length" of the system, R, on ECS. This is the moment of inertia. By fixing the center of mass and moment of inertia in ECS, we can work indirectly with Shape Space. The main advantage of doing this is that there is a natural notion of distance in ECS. This can be used to define the distance between two shapes, which is a key input of our calculations.

To describe the calculation, we need to specify a notion of entropy in Shape Space. Entropy can be thought of as the amount of information needed to specify a particular macroscopic state of the system. To make this precise, we can use the notion of distance on ECS to calculate a "volume" on Shape Space. This volume roughly corresponds to the number of shapes that satisfy a particular property describing the state. The more shapes that have this property, the more information is needed to specify the state. The entropy of that state is then related to its volume, Ω_m, divided by the total volume of Shape Space, Ω_tot. Explicitly,

S = -k_B \log \frac{\Omega_m}{\Omega_{tot}} , \qquad (5)

where k_B is Boltzmann's constant.

We will be interested in states described by a subsystem of n < N particles that have a certain center of mass x_0 and moment of inertia r. To make sense of the volume, we need a familiar concept: coarse graining. We can approximate the volume of the state by chopping up the ECS into a grid of size ℓ. Physically, the coarse graining means that we have a measuring device with a finite resolution given by ℓ. Consider a state that is represented by some surface in ECS. This is illustrated in Figure (2) by a line.

Figure 2: Left: Approximation of a line using a grid. Right: Further approximation of the line as a strip of thickness equal to the grid spacing.

The volume of the state is well approximated by counting the number of dark squares intersected by the line. In the Technical Appendix (A), we calculate this volume explicitly. The result is

\Omega_m \propto \ell^2 \, r^{n-2} \left( R^2 - r^2 - \left(1 + \frac{m}{M-m}\right) \frac{m}{M} \, x_0^2 \right)^{\frac{N-n-2}{2}} , \qquad (6)

where M and R are the total mass and moment of inertia of the whole system and m is the mass of the subsystem. We can then compare this volume to the total volume of Shape Space, which goes like the volume of an N−1 dimensional sphere (the −1 is because of the center of mass gauge fixing). Thus,

\Omega_{tot} \propto R^{N-1} . \qquad (7)

The resulting entropy is

S = \frac{1}{2} \frac{N}{n} \, k_B \left( \frac{r}{R} \right)^2 - k_B \log \frac{r}{R} + \dots . \qquad (8)

Remarkably, the first term is exactly the entropy of a black hole calculated by Bekenstein and Hawking [13, 14]. More remarkably, the second term is exactly the first correction to the Bekenstein–Hawking result calculated in field theory [15, 16].
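A numerical look at the structure of the reconstructed result (8), with k_B = 1 and illustrative values of N and n chosen by us: the quadratic "area" term overwhelms the logarithmic correction except at very small r/R.

```python
import numpy as np

# Entropy (8) with k_B = 1: S = 0.5*(N/n)*(r/R)**2 - log(r/R) + ...
# N and n are illustrative; the point is the hierarchy of the two terms.
N, n, R = 10_000, 10, 1.0

for r in (0.01, 0.05, 0.1, 0.2, 0.5):
    area_term = 0.5 * (N / n) * (r / R) ** 2
    log_term = -np.log(r / R)
    print(f"r/R = {r:4.2f}: area term = {area_term:8.2f}, "
          f"log correction = {log_term:5.2f}")
```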
Erik Verlinde [17] discovered a way to interpret Newtonian gravity as an entropic force for systems whose entropy behaves in this way. It would appear that this simple model of a coarse graining of pure shapes has the right structure to reproduce Newtonian gravity.

5 Conclusions

We have questioned the basic assumptions that: i) time and space should be treated on the same footing, ii) scale should enter our fundamental theories of Nature, and iii) the evolution of the Universe is independent of the measurement process. This has led us to a radical proposal: that time and scale emerge from a coarse graining of a theory of pure shape. The possibility that gravity could come out of this formalism was suggested by a simple toy model.

The results of this model are non–trivial. The key result was that the entropy (8) scales like r², which, dimensionally, is an area. In three dimensions, this is the signature of holography. Thus, in this simple model, Shape Space is holographic. If this is a generic feature of Shape Space, it would be an important observation for quantum gravity. Moreover, the toy model may shed light on the nature of the Planck length. In this model, the Planck length is the emergent length arising in ECS given by

L^2_{Planck} = G\hbar \propto \frac{R^2}{N} . \qquad (9)

This dimensionful quantity, however, is not observable in this model. What is physical, instead, is the dimensionless ratio r/R. This illustrates how a dimensionful quantity can emerge from a scale independent framework. Size doesn't matter – but a ratio of sizes does. The proof could be gravity.

A Technical Appendix

The extended configuration space is ℝ^N: the space coordinates r_i (i = 1,…,N) of N particles in 1 dimension. To represent the reduced configuration space, or Shape Space, we can use a gauge fixing surface. To fix the translations, we can fix the center of mass to be at the origin of the coordinate system:

\sum_{i=1}^{N} m_i r_i = 0 . \qquad \text{(center of mass at the origin)} \qquad (10)

The equation above gives a constraint selecting a plane through the origin whose orientation is determined by the masses m_i. A natural gauge–fixing for the generators of dilatations is to set the moment of inertia with respect to the center of mass to a constant⁶ (the weak equation holds when the gauge–fixing (10) is applied):

\sum_{i<j} \frac{m_i m_j}{M^2} |r_i - r_j|^2 \approx \sum_{i=1}^{N} \frac{m_i}{M} |r_i|^2 = R^2 . \qquad \text{(fixed moment of inertia)} \qquad (11)

The last relation defines a sphere in ℝ^N centered at the origin. Thus, Shape Space is the intersection of the (N−1)-dimensional sphere (11) with the plane (10).

⁶ We are using here the notion of moment of inertia with respect to a point, which we rescaled by the total mass M = \sum_i m_i to give it the dimensions of a squared length.

The flat Euclidean metric, ds^2 = m_i \delta_{ij} \delta_{ab} \, dr_i^a \, dr_j^b, is the natural metric on the extended configuration space Q. This metric induces the non–flat metric

ds^2_{induced} = m_i \, \delta_{ij} \, \delta_{ab} \, dr_i^a \, dr_j^b \Big|_{QS} \qquad (12)

on Shape Space.

A.1 Description of a macrostate in Shape Space

Consider an N–particle toy Universe with an n–particle subsystem, n < N. The particles in the subsystem have coordinates x_i = r_i (i = 1,…,n), while the coordinates of all the other particles will be called y_i = r_{n+i} (i = 1,…,N−n). It is useful to define the coordinates of the center of mass of the subsystem and of the rest of the Universe:⁷

x_0 = \sum_{i=1}^{n} \frac{m_i}{m} x_i , \qquad y_0 = \sum_{i=1}^{N-n} \frac{m_{n+i}}{M-m} y_i , \qquad m = \sum_{i=1}^{n} m_i , \qquad (13)

⁷ Notice that the two sets of coordinates must satisfy the relation m x_0 + (M−m) y_0 = 0 in order to keep the total center of mass at the origin.

and the center–of–mass moments of inertia of the two subsystems
r^2 = \sum_{i=1}^{n} \frac{m_i}{M} |x_i - x_0|^2 , \qquad (r')^2 = \sum_{i=1}^{N-n} \frac{m_{n+i}}{M} |y_i - y_0|^2 . \qquad (14)

The relation between the moments of inertia of the total system and those of the two subsystems is

R^2 = r^2 + (r')^2 + \left(1 + \frac{m}{M-m}\right) \frac{m}{M} \, x_0^2 . \qquad (15)

We define a macrostate as a state in which the moment of inertia of the subsystem, r, and its center of mass, x_0, are constant. To calculate the Shape Space volume of such a macrostate, …
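As a quick numerical sanity check of the decomposition (15), here is a short sketch of ours using the definitions (10), (11), (13), and (14) with random masses and 1D positions; it is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N, n = 12, 5
m_i = rng.uniform(0.5, 2.0, N)    # random particle masses
M = m_i.sum()
m = m_i[:n].sum()                 # mass of the n-particle subsystem

q = rng.standard_normal(N)        # random 1D positions
q -= (m_i * q).sum() / M          # enforce (10): total center of mass at 0

x, y = q[:n], q[n:]
x0 = (m_i[:n] * x).sum() / m               # eq. (13)
y0 = (m_i[n:] * y).sum() / (M - m)

R2 = (m_i * q**2).sum() / M                # eq. (11), center of mass fixed
r2 = (m_i[:n] * (x - x0)**2).sum() / M     # eq. (14)
rp2 = (m_i[n:] * (y - y0)**2).sum() / M

rhs = r2 + rp2 + (1 + m / (M - m)) * (m / M) * x0**2   # eq. (15)
print(R2, rhs)                    # agree to machine precision
assert np.isclose(R2, rhs)
```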
