QM II - W2

$\newcommand{\dede}[2]{\frac{\partial #1}{\partial #2} } \newcommand{\dd}[2]{\frac{d #1}{d #2}} \newcommand{\divby}[1]{\frac{1}{#1} } \newcommand{\typing}[3][\Gamma]{#1 \vdash #2 : #3} \newcommand{\xyz}[0]{(x,y,z)} \newcommand{\xyzt}[0]{(x,y,z,t)} \newcommand{\hams}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2} + \dede{^2}{y^2} + \dede{^2}{z^2}) + V\xyz} \newcommand{\hamt}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2} + \dede{^2}{y^2} + \dede{^2}{z^2}) + V\xyzt} \newcommand{\ham}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2}) + V(x)} \newcommand{\konko}[2]{^{#1}\space_{#2}} \newcommand{\kokon}[2]{_{#1}\space^{#2}} $

# Content

$\newcommand{\L}{\mathcal L}$

### Repe Hamiltonian mechanics

I researched a bit about intuitions for Hamiltonian mechanics and found some better ways to conceptualize it, and also canonical quantisation. Let's take another look at the two + one defining equations of Hamiltonian mechanics:

$$\dot q = \dede{H}{p}$$

$$\dot p = -\dede{H}{q}$$

and

$$\dd{A}{t} = \{A,H\} + \dede{A}{t}$$

We will derive the last equation from the first two to get a better feel for what Hamiltonian mechanics actually _does_.

#### Derivation

We start from the total differential $dA$ of $A$:

$$dA(q,p,t) = \dede{A}{q} dq + \dede{A}{p}dp + \dede{A}{t}dt$$

We can imagine this as a decomposition of the difference vector in $A$ into its components $dq, dp, dt$: the three sources of change in $A$. We now want to find the total derivative of $A$ w.r.t. $t$:

$$\dd{A}{t} = \dede{A}{q} \underbrace{\dd{q}{t}}_{\dot q = \dede{H}{p}} + \dede{A}{p}\underbrace{\dd{p}{t}}_{\dot p = -\dede{H}{q}} + \dede{A}{t}\underbrace{\dd{t}{t}}_{1}$$

Plugging in the two Hamilton equations, we obtain the familiar equation for time evolution under the Hamiltonian:

$$\dd{A}{t} = \{A,H\} + \dede{A}{t}$$

In some sense the argument to find the time evolution of $A$ was "geometric".

#### The role of $t$

Note how $t$ entered the equation above kind of "magically", via the defining equations.
The magic happened when we found that $\dede{H}{p}$ and $\dede{H}{q}$ lead to derivatives in $t$. Note that we need derivatives in $t$ to do evolution/displacement in $t$, via integration: $A \approx \int \dd{A}{t} dt$.

#### The Noether theorem & Flows

We know from the Noether theorem that if $A$ and $B$ are a conjugate variable pair, and translations in $A$ are a symmetry of the Hamiltonian, then $B$ is a conserved quantity. Or mathematically (using $\dd{B}{t} = -\dede{H}{A}$ for the conjugate pair, and vice versa):

$$\dede{H}{A} = 0 \iff \dd{B}{t} = 0, \qquad \text{i.e. } \{B,H\} = 0 \iff \dd{B}{t} = 0$$

We can interpret this statement geometrically: Consider the vector field $X_{H}$ generated by $H$ via the Hamilton equations:

$$X_{H} = \left(\dede{H}{p}, -\dede{H}{q}\right) = (\dot q, \dot p)$$

(link to interactive visualization)

This gives us a visualization of how a state (a point in the $q,p$ plane) will evolve over time. This vector field is said to generate the flow of $H$. There are now two things we can do with this flow:

- Show the trajectories by following the flow.
- Show the "isobars" by going orthogonal to the flow.

Note that because trajectories are orthogonal to "isobars", the opposite is also true.

##### What is an "isobar"?

We know that following the flow is time evolution, and that for any starting position we'll get a trajectory. Let's consider the case of the harmonic oscillator. We see that we get "rings" of trajectories, one ring per initial condition. Each ring is characterized by its zero-momentum crossing (or any other point in phase space it goes through, for that matter). So every ring has a specific energy! The isobars of the Hamiltonian flow are the lines of constant energy. This gives us the dual pair of time and energy!

#### Other flows

Now we consider general flows $X_{A}$, and we can again check what such a flow would look like:

$$X_{A} = \left(\dede{A}{p}, -\dede{A}{q}\right)$$

Before, with energy and time, we saw that moving along the flow $X_{H}$ was done by changing $t$, while moving orthogonal to the flow was done by changing $E$.
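The "ring" picture for the harmonic oscillator can be checked numerically. The sketch below (not from the lecture; units with $m = k = 1$ assumed, integrator choice is mine) follows the flow $X_{H} = (\dede{H}{p}, -\dede{H}{q})$ with a symplectic Euler step and verifies that the energy, the "isobar" value, stays constant along the trajectory, which closes into a ring after one period.

```python
# Sketch (assumed m = k = 1): follow the flow X_H for H = (p^2 + q^2)/2 and
# check that it traces a closed "ring" along which the energy stays constant.
import math

def flow(q, p):
    return p, -q  # X_H = (dH/dp, -dH/dq)

def step(q, p, dt):
    # Symplectic Euler: update p using the old q, then q using the new p.
    dq, dp = flow(q, p)
    p = p + dt * dp
    q = q + dt * p
    return q, p

q, p = 1.0, 0.0
E0 = 0.5 * (q * q + p * p)
dt = 1e-3
for _ in range(int(2 * math.pi / dt)):  # one full period, T = 2*pi
    q, p = step(q, p, dt)
E1 = 0.5 * (q * q + p * p)

assert abs(E1 - E0) < 2e-3                     # energy conserved along the flow
assert abs(q - 1.0) < 1e-2 and abs(p) < 1e-2   # trajectory is a closed ring
```

Moving orthogonal to this flow (changing the initial radius) switches between rings, i.e. changes $E$.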
We expect our flow $X_{A}$ to decompose similarly. So moving along the flow would mean changing "$t_{A}$", while moving orthogonal to the flow would mean changing $E_{A}$. Note that for the energy/time combination, the parameter along the flow ($t$) was a good "state variable", while going orthogonal to the flow was a way to switch between different system instantiations (initial conditions). For energy this meant that because $t$ and $E$ "move orthogonally", time evolution conserves the energy. Transferring this to the flow $X_{A}$ means that evolving in $t_{A}$ conserves $A$.

##### Example: Momentum conservation

Consider the evolution under $X_{p}$. The shape of $X_{p}$ is then:

$$X_{p} = \left(\dede{p}{p}, -\dede{p}{q}\right) = (1,0) = \left(\dd{q}{t_{p}}, \dd{p}{t_{p}}\right)$$

We see that $p$ doesn't change in $t_{p}$, and $q$ changes proportionally to $t_{p}$. This leads to the hunch that $t_{p}$ could be just $q$ (which turns out to be correct). We see that a system that conserves $p$ will have a "flow" which you can move along by changing $q$. We can verify that $p$ is a symmetry of a free particle by plotting the flows for both scenarios (or by computing the Poisson bracket). We see that the flows are parallel, which means that they will keep the same variables constant. So for a system with kinetic energy $\frac{1}{2m}p^{2}$, whose energy we know to be conserved under time evolution, there is a second quantity $p$ which is also conserved, due to the translation symmetry of the system. This is the Noether theorem!

#### Finding the actual time evolution

To find the actual time evolution given the flow $X_{H}$ we need to solve the Hamilton differential equations:

$$\dede{H}{p} = \dot q$$

$$\dede{H}{q} = -\dot p$$

We can try this for the free particle:

$$\dede{H}{p} = \frac{p}{m} = \dot q$$

$$\dede{H}{q} = 0 = -\dot p$$

We want $q(t)$, so we need to integrate: $q(t) = q(0) + \frac{pt}{m} = q(0) + v\cdot t$, which is exactly the expected solution.
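The Poisson-bracket check mentioned above can be done numerically. A minimal sketch (the finite-difference scheme and the names `poisson`, `H` are mine, not from the lecture): for the free particle $H = \frac{p^2}{2m}$ we expect $\{p,H\} = 0$ (momentum conserved) and $\{q,H\} = \frac{p}{m}$ (reproducing $\dot q = p/m$, hence $q(t) = q(0) + pt/m$).

```python
# Numerical sketch: Poisson brackets {A, B} via central finite differences,
# applied to the free particle H = p^2 / (2m).
def poisson(A, B, q, p, h=1e-6):
    """{A,B} = dA/dq * dB/dp - dA/dp * dB/dq (central differences)."""
    dAq = (A(q + h, p) - A(q - h, p)) / (2 * h)
    dAp = (A(q, p + h) - A(q, p - h)) / (2 * h)
    dBq = (B(q + h, p) - B(q - h, p)) / (2 * h)
    dBp = (B(q, p + h) - B(q, p - h)) / (2 * h)
    return dAq * dBp - dAp * dBq

m = 2.0
H = lambda q, p: p * p / (2 * m)  # free particle
q, p = 0.5, 1.5

# {p, H} = 0: momentum conserved (translation symmetry of the free particle)
assert abs(poisson(lambda q, p: p, H, q, p)) < 1e-9
# {q, H} = p/m: exactly the Hamilton equation dq/dt = p/m
assert abs(poisson(lambda q, p: q, H, q, p) - p / m) < 1e-6
```

The same two calls with a potential $V(q)$ added to `H` would show $\{p,H\} \neq 0$: the translation symmetry, and with it momentum conservation, is lost.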
#### Propagator approach

We want to formulate the time evolution using a propagator $U(t)$. Using the Ansatz $\begin{pmatrix}q(t)\\ p(t)\end{pmatrix} = U(t)\begin{pmatrix}q(0)\\ p(0)\end{pmatrix}$ we can try to solve the Hamilton equations. We first take the time derivative of the above expression:

$$\begin{pmatrix} \dot q \\ \dot p \end{pmatrix} = \dot U\begin{pmatrix} q(0) \\ p(0) \end{pmatrix}$$

We then note that we can also write this as

$$\begin{pmatrix} \dede{H(t)}{p} \\ -\dede{H(t)}{q} \end{pmatrix} = \dot U\begin{pmatrix} q(0) \\ p(0) \end{pmatrix}$$

We now define the "Hamilton application operator" $\mathcal H: \begin{pmatrix} q \\ p\end{pmatrix} \to \begin{pmatrix} \dede{H(t)}{p} \\ -\dede{H(t)}{q} \end{pmatrix}$. We can thus write

$$\mathcal H U(t) \begin{pmatrix} q(0) \\ p(0) \end{pmatrix} = \dot U(t)\begin{pmatrix} q(0) \\ p(0) \end{pmatrix}$$

We thus have the operator equation $\mathcal H U = \dot U$, which should look pretty familiar (in QM we had $d_{t}U = \frac{1}{i\hbar} H_{s}U$). With a bit of abuse of notation we can solve this with the Ansatz $U = e^{Xt}$, which gives:

$$U(t) = e^{\mathcal H t}$$

This should also be familiar from QM, and additionally from MMP II: what we just did is connect an element of the Lie algebra ($\mathcal H$) to elements of the Lie group ($U(t)$).

### MMP II Recap

So we saw some formulae which looked very familiar to everyone who took MMP II. I'm not gonna do a full MMP II course in this lecture, but I'll try to remind you of some important results.

##### Lie groups & algebras

In the second part of MMP II we were confronted with a very ugly problem. Up to that point we had been looking at groups with finitely many elements. For those finite groups we found that we can turn complicated group math (with multiplication tables etc.) into much simpler linear algebra via representation theory.
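For the harmonic oscillator (with $m = k = 1$, an assumption of this sketch), $\mathcal H$ is the linear map $(q,p) \to (p, -q)$, i.e. a $2\times 2$ matrix, and $U(t) = e^{\mathcal H t}$ can be computed literally as a matrix exponential. The power-series exponential below (my own minimal implementation) reproduces the known rotation-matrix propagator $U(t) = \begin{pmatrix}\cos t & \sin t\\ -\sin t & \cos t\end{pmatrix}$.

```python
# Sketch: U(t) = exp(Hmat * t) for the harmonic oscillator with m = k = 1,
# where Hmat = [[0, 1], [-1, 0]] implements (q, p) -> (dH/dp, -dH/dq) = (p, -q).
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """exp(A) via the truncated power series sum A^n / n!."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, A)
        fact *= n
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / fact
    return result

t = 0.8
U = mat_exp([[0.0, t], [-t, 0.0]])  # Hmat * t
assert abs(U[0][0] - math.cos(t)) < 1e-12  # q(t) = cos(t) q0 + sin(t) p0
assert abs(U[0][1] - math.sin(t)) < 1e-12
```

This is exactly the Lie algebra/group connection: the generator `Hmat` lives in the algebra (it adds, it has commutators), while `U` lives in the group (it composes by multiplication).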
We did this because we wanted to bring the nice linalg properties such as addition and eigenspaces to group theory. But then we were confronted with a big problem: infinite groups.

###### Infinite groups

The first one you probably saw was the rotation group in 2D. There are just infinitely many rotations you can make there. Enter

> The Lie algebra

We were looking for an object that parametrizes the group in a nice way, where "nice" means that things we do in the group correspond to nice things in the algebra. To quickly tell whether a thing lives in the Lie group or in the Lie algebra, ask yourself the following *multiple choice* questions:

- Can it be added? -> Algebra
- Does it have a commutator? -> Algebra
- Does it act mainly by multiplication/composition? -> Group
- Does it "live in the exponential" of something? -> Algebra

###### The connection to QM

Now how does this relate to QM? Well, there are two connections:

- Symmetry
- Time evolution

We saw that time evolution was kind of complicated. It's not very straightforward to combine two time evolutions $U$ into one. The thing is, all the $U$'s have a quite complicated group structure (some $U$ and $\tilde U$ might combine into $U'$), they may or may not commute, etc. We wanted to find a nice way to parametrize the groups. From Theoretical Mechanics we knew that parametrisations (I called them "fake times") are directly linked with symmetries/conservation laws (I called those "fake energies" or isobars). Going to quantum mechanics via this operator route is the standard approach called _canonical quantisation_. There are alternative approaches (as you are seeing right now in the lecture), which take a different object as central to quantisation:

- The path.

That they are related in some sense is clear from Theoretical Mechanics, because there we saw that paths follow the flows generated by conservation laws, and are parametrized by the "fake times".
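The 2D rotation group illustrates the checklist above concretely. In the sketch below (a toy demo of mine, not from the lecture), the group elements are rotation matrices $R(\theta)$ that compose by multiplication, while the algebra elements are just the angles (times the generator), which add, and the exponential map connects the two: $R(a)R(b) = R(a+b)$.

```python
# Sketch for SO(2): multiplication in the group corresponds to addition of
# the parameters (the Lie algebra of SO(2) is abelian).
import math

def R(theta):
    """Group element: a rotation matrix, living 'in the exponential' of theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.3, 1.1
AB = mul(R(a), R(b))  # multiplication in the group ...
C = R(a + b)          # ... is addition in the algebra
for i in range(2):
    for j in range(2):
        assert abs(AB[i][j] - C[i][j]) < 1e-12
```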
###### Lie math

The Lie algebra of a Lie group is defined as follows:

$$\mathfrak g = \mathrm{Lie}(G) = \{ X \mid e^{Xt} \in G \ \forall t \in \mathbb R \}$$

We essentially transform multiplications in $G$ into additions in $\mathfrak g$ via the exponential map.

##### Baker-Campbell-Hausdorff

Sadly the exponential map is not as nice as for real and complex numbers, because the Lie algebra is not necessarily commutative! We thus need a different way to do exponential simplifications:

$$e^{tX}e^{tY} = \exp \left( tX + tY + \frac{t^{2}}{2} [X,Y] + \mathcal O(t^{3})\right)$$

$$\mathcal O(t^{3}) = \frac{t^{3}}{12}([X,[X,Y]] + [Y,[Y,X]]) + \mathcal O(t^{4})$$

This is good because we can now use "almost-commutations" to do the exponential multiplication. One typical example is momentum and position: because $[x,p] = i\hbar$, and complex numbers commute with operators, all terms of order $3$ or higher vanish. Thus we actually get an exact formula:

$$e^{tx}e^{tp}=\exp(tx + tp + \frac{t^{2}}{2}i\hbar)$$

# Lecture

## Repe Path integral

In the lecture we saw two characterisations of the propagation kernel $K$:

- We first saw that it is simply the matrix element of $U$: $K(t_{f},q_{f},t_{i},q_{i}) = \bra {q_{f}} U \ket{q_{i}}$. For a free particle we found that $K$ can also be written as an exponential of the Lagrangian.
- We also saw that we can expand $K$ as a series of integrals over paths: $$K(t_{f}-t_{i},q_{f},q_{i}) = \int dq_{1} \cdots \int dq_{N-1}\, K(\epsilon, q_{f},q_{N-1}) K(\epsilon, q_{N-1}, q_{N-2}) \cdots K(\epsilon, q_{1},q_{i})$$   Here we remember that the integration variables $q_{j}$ were "slits", so each integral runs orthogonal to the propagation direction. I.e. the integrals select a path (and do not go along it).

The exponential formulation and the integral-over-paths formulation fit neatly together (because we want to make sums out of products in the integral-over-paths formalism).
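The truncation of Baker-Campbell-Hausdorff in the $[x,p] = i\hbar$ case can be checked on a finite-dimensional stand-in: $3\times 3$ Heisenberg matrices, where the commutator $[X,Y]$ is central, so all higher BCH terms vanish exactly, just like for position and momentum. The sketch below (matrices and helper names are my own toy choices) verifies $e^{X}e^{Y} = \exp(X + Y + \frac{1}{2}[X,Y])$.

```python
# Sketch: verify truncated BCH, e^X e^Y = exp(X + Y + [X,Y]/2), on 3x3
# Heisenberg matrices where [X, Y] commutes with both X and Y -- the
# finite-dimensional analogue of [x, p] = i*hbar.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B, s=1.0):
    """A + s * B."""
    return [[A[i][j] + s * B[i][j] for j in range(len(A))] for i in range(len(A))]

def expm(A):
    """exp(A) for strictly upper-triangular 3x3 A: series stops at A^2 (A^3 = 0)."""
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    return add(add(I, A), mul(A, A), 0.5)

X = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
Y = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
comm = add(mul(X, Y), mul(Y, X), -1.0)   # [X, Y], a central element

lhs = mul(expm(X), expm(Y))              # e^X e^Y
rhs = expm(add(add(X, Y), comm, 0.5))    # exp(X + Y + [X,Y]/2)
for i in range(3):
    for j in range(3):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
```

Dropping the $\frac{1}{2}[X,Y]$ term makes the check fail, which is exactly the non-commutativity BCH accounts for.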
> Sidenote: What we are doing here is we realized that the problem is difficult on the group of propagation kernels, so we moved it to the Lie algebra (which happens to be the Lagrangians).

$$K(t_{f}-t_{i}, q_{f},q_{i}) = \lim_{N\to\infty} C^{N}_\varepsilon \int dq_{1} \cdots \int dq_{N-1} \exp\left( \frac{i}{\hbar}\varepsilon \sum\limits_{j=0}^{N-1} L_\varepsilon(q_{j+1}, q_{j})\right)$$

Taking the limits and wrapping it all up gives us the neat formula:

$$K(t_{f}-t_{i}, q_{f}, q_{i}) = \int_{q_{i},t_{i}\to q_{f},t_{f}} D\gamma\, e^{\frac{i}{\hbar}S[\gamma]}$$

Where the notation $D\gamma$ means that we pick any possible path $\gamma$ between the starting and end points.

### How do we understand this?

Let's go back to the classical case of light rays, where we had the "action" $S[\gamma] = \int_{\gamma} n(q)\, d\gamma$. The problem is that this thing doesn't actually have the units of an action (it's missing a momentum). We now patch in a momentum in a slightly strange way.

#### De Broglie momentum for light

Remember the de Broglie wavelength:

- $\lambda = \frac{h}{p}$

We now do the opposite and associate a momentum with light: $p = \frac{h}{\lambda}$.

#### Corrected light-ray action

We now write the unit-corrected light-ray action (with $\lambda_{0}$ the vacuum wavelength):

$$S[\gamma] = \int_{\gamma}n(q) \frac{h}{\lambda_{0}}d\gamma$$

For simplicity we first consider just one medium; $S$ is then proportional to the real path length. Remember that $K$ specifies the probability amplitude to find a particle at $q_{f}$ after a time $\Delta t$, in case we started at $q_{i}$. The probability is going to be highest where the contributions add up, that is, where the phases $e^{\frac{i}{\hbar} S[\gamma]}$ are in phase. Plugging in our formula for $S$ we obtain:

$$\exp\left(\int_{\gamma} \frac{i}{\hbar}\, n(q) \frac{h}{\lambda_{0}} d\gamma\right) = \exp\left(\int_{\gamma} \frac{2\pi i}{\lambda(q)} d\gamma\right)$$

where $\lambda(q) = \lambda_{0}/n(q)$ is the local wavelength. This means that paths contribute coherently when their lengths agree to within a wavelength, which happens to be wave optics!

#### Back to matter

The situation for matter interference is similar.
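The "contributions add up when in phase" argument is a stationary-phase statement, and it can be made visible numerically. The sketch below is a toy of mine (not from the lecture): for a free particle, paths $q(t) = q_{cl}(t) + a\sin(\pi t/T)$ have action $S(a) = S_{cl} + \text{const}\cdot a^{2}$, so the phase is $e^{ica^{2}}$ for some constant $c$ (here an arbitrary toy value, $\hbar = 1$). Paths near the classical one ($a \approx 0$) add coherently; far-off paths oscillate and cancel.

```python
# Sketch: phases exp(i c a^2) for a family of paths deviating from the
# classical one by amplitude a. Near a = 0 the phases align; far away they
# cancel. c = 50 is an arbitrary toy value of m*pi^2 / (4*T*hbar).
import cmath

c = 50.0
da = 0.001

def ratio(lo, hi):
    """|sum of phases| / number of paths, for deviation amplitudes lo <= a < hi
    (both signs of a included; +a and -a give the same phase)."""
    pts = [lo + da * k for k in range(int(round((hi - lo) / da)))]
    terms = [cmath.exp(1j * c * a * a) for a in pts] * 2  # mirror side -a
    return abs(sum(terms)) / len(terms)

near = ratio(0.0, 0.1)   # paths close to the classical one: phases aligned
far = ratio(1.0, 2.0)    # far-off paths: phases oscillate and cancel
assert near > 0.9
assert far < 0.1
```

This is why the classical path dominates the path integral when $S \gg \hbar$: only paths whose action differs by less than roughly $\hbar$ (one "wavelength" of phase) contribute coherently.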
The only real difference is that matter has an energy which is quadratic in momentum, as opposed to the linear relation for light. We see this difference when considering the energy of a photon: $E = h\nu = \frac{hc}{\lambda} = c p$, whereas a massive particle has $E = \frac{1}{2m}p^{2}$. This leads to different dispersion relations:

- For light: $\omega(k) \propto k$
- For matter: $\omega(k) \propto k^{2}$

This means that while for light the phase shifts due to time and space are equal, for matter the phase shifts due to time and space are different. This leads to matter being able to travel at different speeds, while light always propagates at the same speed (in vacuum).

### Dualities

We saw a few types of dualities in this lecture:

- Ray / Wave
- "fake time" & "fake energy"
- Path & crossing points

In most of the approaches to quantisation we exploit this duality.
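The two dispersion relations above translate directly into phase and group velocities. A minimal sketch (units with $\hbar = m = c = 1$ assumed): for light, $v_{phase} = v_{group}$, so all wave packets move at the same speed; for matter, $\omega = \frac{\hbar k^{2}}{2m}$ gives $v_{group} = 2 v_{phase} = \frac{\hbar k}{m}$, which depends on $k$, so matter can travel at different speeds.

```python
# Sketch: phase velocity omega/k vs group velocity d(omega)/dk for the two
# dispersion relations (hbar = m = c = 1 assumed).
hbar, m, c = 1.0, 1.0, 1.0
k, dk = 2.0, 1e-6

def vels(omega):
    v_phase = omega(k) / k
    v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # d(omega)/dk
    return v_phase, v_group

vp, vg = vels(lambda k: c * k)                   # light: omega = c k
assert abs(vp - vg) < 1e-9                       # everything moves at c

vp, vg = vels(lambda k: hbar * k * k / (2 * m))  # matter: omega = hbar k^2 / 2m
assert abs(vg - 2 * vp) < 1e-6                   # group speed = 2 * phase speed
```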