Variational Quantum Eigensolver Tutorial-1. Basics of the Quantum Module

The Variational Quantum Eigensolver (VQE) is a hybrid quantum/classical algorithm for finding the eigenvalues of a matrix H. VQE can be used for quantum chemistry simulation and for solving combinatorial optimization problems such as the Travelling Salesman Problem. For many of these problems, the matrix H is the Hamiltonian of the system. Before VQE was proposed, quantum algorithms such as Quantum Phase Estimation (QPE) were used to estimate the eigenvalues of a unitary matrix. Although QPE only requires O(1) repetitions of the circuit, the circuit size scales exponentially in the number of bits of precision. Therefore, QPE is ill-suited to Noisy Intermediate-Scale Quantum (NISQ) systems, which only have tens to hundreds of noisy qubits. To obtain a quantum eigenvalue-solving algorithm suitable for NISQ systems, researchers proposed VQE: a hybrid algorithm consisting of a quantum expectation-value estimation module together with a classical optimizer. The circuit depth of the quantum module is very low, hence it is suitable for NISQ systems. In this series of articles, I will cover the following topics: (1) Basics of the Quantum Module, (2) Ansatz and the Classical Optimizer, (3) Solving the Max-Cut Problem with VQE, (4) Writing VQE Programs in Qiskit.

Further Reading: For precision \varepsilon , QPE requires O(1) iterations of a circuit with depth O(1/\varepsilon) , whereas each expectation estimate within VQE requires O(\frac{1}{\varepsilon^2}) samples from circuits of depth O(1) [1]. There is a tradeoff between circuit depth and the number of samples/iterations. Recently, researchers have proposed a generalized VQE algorithm that acts as a hybrid of VQE and QPE, with an adjustable parameter that controls the circuit depth and the number of samples required [1].

Basics of the quantum module

In quantum theory, an expectation value of observable H in state \left|\psi\right> is denoted as \left<\psi\right|H \left|\psi\right> .

According to spectral decomposition, the matrix H can be represented as:

H = \lambda_1\left|\psi_1\right>\left<\psi_1\right| + \lambda_2\left|\psi_2\right>\left<\psi_2\right| + ... + \lambda_n\left|\psi_n\right>\left<\psi_n\right|

where \lambda_i and \left|\psi_i\right> are the eigenvalues and eigenstates of matrix H. The eigenstates \left|\psi_i\right> are orthonormal: they form a complete basis with \left<\psi_i|\psi_j\right> = 0 if i \neq j. The state \left|\psi\right> can therefore be written as a superposition of the eigenstates: \left|\psi\right> = \alpha_1 \left|\psi_1\right> + \alpha_2 \left|\psi_2\right> + ... + \alpha_n \left|\psi_n\right> with \sum_i |\alpha_i|^2 = 1. The expectation value of observable H is:

\left<\psi\right|H \left|\psi\right> = |\alpha_1|^2\lambda_1 + |\alpha_2|^2\lambda_2 + ... + |\alpha_n|^2\lambda_n \geq \lambda_{min}

Here \lambda_{min} is the minimum eigenvalue (also referred to as the ground state energy), and the inequality holds because the |\alpha_i|^2 sum to one. If we minimize \left<\psi\right|H \left|\psi\right> over states \left|\psi\right> , we obtain the minimum eigenvalue \lambda_{min} and the corresponding eigenstate \left|\psi_{min}\right> . This is exactly how VQE works!
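This bound is easy to verify numerically. The sketch below (using NumPy; the 2x2 matrix and the random state are illustrative choices, not from the text) reconstructs a Hermitian H from its spectral decomposition and checks that the expectation value in a normalized state never drops below \lambda_{min} :

```python
import numpy as np

# A hypothetical 2x2 Hermitian observable H (illustrative choice).
H = np.array([[2.0, 1.0],
              [1.0, -1.0]])

# Spectral decomposition: eigenvalues and orthonormal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(H)
lambda_min = eigvals[0]  # eigh returns eigenvalues in ascending order

# Reconstruct H = sum_i lambda_i |psi_i><psi_i|
H_rebuilt = sum(lam * np.outer(v, v.conj())
                for lam, v in zip(eigvals, eigvecs.T))
assert np.allclose(H, H_rebuilt)

# For any normalized state |psi>, <psi|H|psi> >= lambda_min.
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
expectation = np.real(psi.conj() @ H @ psi)
assert expectation >= lambda_min - 1e-12
```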

Sketch of the VQE algorithm

Now we can have a sketch of the VQE algorithm:

  • First, we start with an initial guess of the eigenstate \left|\psi_0\right> and we measure the expectation value of H in that state \left<\psi_0\right|H \left|\psi_0\right> .
  • Then, we change the state a little and perform the measurement again. If the expectation value decreases, we know we are moving in the right direction, and we keep changing the state and measuring the expectation value until we reach the minimum value \left<\psi_{min}\right|H \left|\psi_{min}\right> . This expectation value is the ground state energy, and the corresponding state is the ground state. Problem solved!
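The loop above can be sketched with a one-parameter toy problem. Everything here is an assumption for illustration: the 2x2 Hamiltonian, the trial state \left|\psi(\theta)\right> = \cos\theta\left|0\right> + \sin\theta\left|1\right> , and a crude hill-descending search standing in for a real classical optimizer:

```python
import numpy as np

# Toy Hamiltonian (an illustrative 2x2 Hermitian matrix).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def psi(theta):
    """Parametrized trial state |psi(theta)> = cos(theta)|0> + sin(theta)|1>."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)>."""
    v = psi(theta)
    return float(v @ H @ v)

# Crude classical optimizer: nudge theta and keep changes that lower the energy.
theta, step = 0.0, 0.1
for _ in range(200):
    for delta in (step, -step):
        if energy(theta + delta) < energy(theta):
            theta += delta
            break
    else:
        step /= 2  # no improvement in either direction: shrink the step

# The loop converges to the true ground state energy of H.
ground_energy = np.linalg.eigvalsh(H)[0]
assert abs(energy(theta) - ground_energy) < 1e-6
```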

See, the idea is easy to follow. But several questions need to be answered: (1) State-of-the-art quantum computers only provide measurements in the computational basis ( \left|0\right> and \left|1\right> ). How do we measure the expectation value on these systems? (2) How do we change the quantum state to minimize the expectation value?

For the second question, the minimization process is controlled by the classical optimizer, which I will explain in the next article.

Measuring expectation value in the computational basis

To answer the first question, let’s recall the properties of the Hamiltonian H. The products of Pauli matrices form a basis for Hermitian matrices (when the dimension is a power of 2), so we can decompose H as:

H = \sum_{i\alpha}h_{\alpha}^{i}\sigma _{\alpha}^{i} + \sum_{ij\alpha\beta} h_{\alpha\beta}^{ij}\sigma _{\alpha}^{i} \sigma _{\beta}^{j} + ...

Here the h are real coefficients, the superscripts i, j identify the qubits (the subspace) on which each operator acts, and \alpha, \beta identify the Pauli operator. In other words, \sigma \equiv ( I, \sigma_{x}, \sigma_{y}, \sigma_{z}) ; \sigma_{x} is also referred to as the Pauli X operator. We have:

I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \sigma_{x} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \sigma_{y} = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \sigma_{z} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}

The expectation value of observable H is:

\left<\psi\right|H \left|\psi\right> = \sum_{i\alpha}h_{\alpha}^{i} \left<\psi\right| \sigma _{\alpha}^{i} \left|\psi\right> + \sum_{ij\alpha\beta} h_{\alpha\beta}^{ij} \left<\psi\right| \sigma _{\alpha}^{i} \sigma _{\beta}^{j} \left|\psi\right> + ...

We can get the expectation value of H by measuring the expectation values of the Pauli strings. For the Hamiltonians we are interested in (quantum chemistry, the Ising model, combinatorial optimization problems, and so on), the number of Pauli strings is polynomial in the size of the system, so the expectation value of H can be measured efficiently.
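For a single qubit, the decomposition coefficients can be extracted with the trace inner product, h_\alpha = Tr(\sigma_\alpha H)/2 , since Tr(\sigma_\alpha \sigma_\beta) = 2\delta_{\alpha\beta} . A minimal NumPy sketch (the matrix H is an illustrative choice):

```python
import numpy as np

# Pauli basis for a single qubit.
I  = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I, "X": sx, "Y": sy, "Z": sz}

# A hypothetical single-qubit Hermitian H (illustrative choice).
H = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -3.0]])

# Paulis are orthogonal under the trace inner product, Tr(s_a s_b) = 2 delta_ab,
# so the coefficients are h_a = Tr(s_a H) / 2 (real because H is Hermitian).
coeffs = {name: float(np.trace(s @ H).real) / 2 for name, s in paulis.items()}

# Reconstruct H from the decomposition H = sum_a h_a s_a.
H_rebuilt = sum(coeffs[name] * s for name, s in paulis.items())
assert np.allclose(H, H_rebuilt)
print(coeffs)  # {'I': -1.0, 'X': 2.0, 'Y': 1.0, 'Z': 2.0}
```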

But how do we measure the Pauli strings in the computational basis? Let’s check the probability distribution when measuring the state \left|\psi\right> in the computational basis:

  • Probability of getting \left|0\right> : P(0) = \left | \left<0|\psi\right> \right |^2 = \left<\psi|0\right> \left<0|\psi\right>
  • Probability of getting \left|1\right> : P(1) = \left | \left<1|\psi\right> \right |^2 = \left<\psi|1\right> \left<1|\psi\right>

The Pauli matrices can be written in terms of outer products of qubit states, so the expectation values are:

  • \left<\psi\right| I \left|\psi\right> = \left<\psi|(|0\right> \left<0| + |1\right> \left<1|) |\psi\right> = \left<\psi|0\right> \left<0|\psi\right> + \left<\psi|1\right> \left<1|\psi\right> = P(0) + P(1)
  • \left<\psi\right| \sigma_z \left|\psi\right> = \left<\psi|(|0\right> \left<0| - |1\right> \left<1|) |\psi\right> = \left<\psi|0\right> \left<0|\psi\right> - \left<\psi|1\right> \left<1|\psi\right> = P(0) - P(1)

That is to say, in order to measure \left<\psi\right| I \left|\psi\right> and \left<\psi\right| \sigma_z \left|\psi\right> , all we need to do is prepare the state \left|\psi\right> and measure it in the computational basis. The probability distribution tells us the expectation value.
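A quick simulation of this procedure: prepare an (assumed, illustrative) state, sample computational-basis outcomes according to the Born rule as a real device would, and estimate \left<I\right> and \left<\sigma_z\right> from the measured frequencies:

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical single-qubit state |psi> = 0.6|0> + 0.8|1> (illustrative).
psi = np.array([0.6, 0.8])

# Born rule: outcome probabilities in the computational basis.
p0, p1 = abs(psi[0])**2, abs(psi[1])**2   # 0.36 and 0.64

# Simulate repeated measurements ("shots"), as on real hardware.
shots = 100_000
outcomes = rng.choice([0, 1], size=shots, p=[p0, p1])
p0_est = np.mean(outcomes == 0)
p1_est = 1 - p0_est

# <psi|I|psi> = P(0) + P(1),  <psi|sigma_z|psi> = P(0) - P(1)
exp_I = p0_est + p1_est   # always ~1
exp_Z = p0_est - p1_est   # exact value would be 0.36 - 0.64 = -0.28
```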

What about \left<\psi\right| \sigma_x \left|\psi\right> and \left<\psi\right| \sigma_y \left|\psi\right> ? The Pauli matrices satisfy \sigma_x = H^\dagger \sigma_z H and \sigma_y = S H^\dagger \sigma_z H S^\dagger , where H and S^\dagger here denote the unitary matrices of the Hadamard gate and the phase gate (not to be confused with the Hamiltonian H). The expectation value of \sigma_x is then \left<\psi\right| \sigma_x \left|\psi\right> = \left<\psi|H^\dagger\sigma_z H|\psi\right> . Remember that \left<\psi\right| \sigma_z \left|\psi\right> = P(0) - P(1) ; the only difference is that the state has changed from \left|\psi\right> to H\left|\psi\right> . Therefore, measuring \sigma_x is equivalent to applying a Hadamard gate to the state and measuring in the computational basis. Measuring \sigma_y is equivalent to applying the phase gate S^\dagger followed by a Hadamard gate and then measuring in the computational basis.
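Both identities, and the resulting measurement recipe, can be checked numerically. The sketch below verifies \sigma_x = H^\dagger \sigma_z H and \sigma_y = S H^\dagger \sigma_z H S^\dagger , and confirms that \left<\psi\right|\sigma_x\left|\psi\right> equals the \sigma_z expectation in the rotated state H\left|\psi\right> (the random state is illustrative):

```python
import numpy as np

# Single-qubit gates and Pauli matrices.
H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.array([[1, 0], [0, 1j]])                     # phase gate
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

Hd = H_gate.conj().T  # the Hadamard is Hermitian, so Hd equals H_gate

# Basis-change identities used for measurement.
assert np.allclose(X, Hd @ Z @ H_gate)                   # sigma_x = H† sigma_z H
assert np.allclose(Y, S @ Hd @ Z @ H_gate @ S.conj().T)  # sigma_y = S H† sigma_z H S†

# Consequence: <psi|sigma_x|psi> equals <phi|sigma_z|phi> with |phi> = H|psi>.
rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
phi = H_gate @ psi
assert np.isclose(psi.conj() @ X @ psi, phi.conj() @ Z @ phi)
```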

Further reading: An interesting observation is that I and \sigma_z can be measured in the same basis, while \sigma_x and \sigma_y require different bases. This means I and \sigma_z can be measured simultaneously, while, for example, \sigma_x and \sigma_z cannot. The theory behind this observation is the commutativity of the Pauli group: two observables share an eigenbasis if and only if they commute, and indeed [I,\sigma_z] = I \sigma_z - \sigma_z I = 0 \rightarrow I \sigma_z = \sigma_z I . If we have a large number of Pauli strings, it is important to classify them into subsets that share a common basis to reduce the total number of measurements. However, the underlying partitioning problem is NP-hard [2]; researchers have introduced systematic techniques for minimizing the requisite state preparations [2].

Conclusion

Now we know how to measure expectation values on a quantum computer. The key idea of the VQE algorithm is to iteratively adjust the quantum state, measuring the expectation value each time, until we reach the ground state. But how can we prepare the quantum state \left|\psi\right> ? How can we tune it so that it reaches the ground state? I will address these questions in the following article, “Tutorial-2. Ansatz and the Classical Optimizer”.

References

  • [1] Wang, Daochen, Oscar Higgott, and Stephen Brierley. “Accelerated variational quantum eigensolver.” Physical review letters 122.14 (2019): 140504.
  • [2] Gokhale, Pranav, et al. “Minimizing state preparations in variational quantum eigensolver by partitioning into commuting families.” arXiv preprint arXiv:1907.13623 (2019).