Quantum AI is still years from enterprise prime time

Quantum computing’s greatest potential for widespread adoption during this decade is in artificial intelligence

Quantum computing’s potential to revolutionize AI depends on growth of a developer ecosystem in which suitable tools, skills, and platforms are in abundance. To be considered ready for enterprise production deployment, the quantum AI industry would have to, at the very least, reach the following key milestones:

  • Find a compelling application for which quantum computing has a clear advantage over classical approaches to building and training AI.
  • Converge on a widely adopted open source framework for building, training, and deploying quantum AI.
  • Build a substantial, skilled developer ecosystem of quantum AI applications.

These milestones are all still at least a few years in the future. What follows is an analysis of the quantum AI industry’s maturity at the present time.

Lack of a compelling AI application for which quantum computing has a clear advantage

Quantum AI executes ML (machine learning), DL (deep learning), and other data-driven AI algorithms reasonably well.

As an approach, quantum AI has moved well beyond the proof-of-concept stage. However, that is not the same as being able to claim that quantum approaches are superior to classical ones for executing the matrix operations on which AI's inferencing and training workloads depend.

Where AI is concerned, the key criterion is whether quantum platforms can execute ML and DL workloads faster than computers built entirely on classical von Neumann architectures. So far there is no specific AI application that a quantum computer performs better than the best classical alternative. For us to declare quantum AI a mature enterprise technology, there would need to be at least a few AI applications for which it offers a clear advantage in speed, accuracy, or efficiency over classical approaches to processing these workloads.

Nevertheless, pioneers of quantum AI have aligned its functional processing algorithms with the mathematical properties of quantum computing architectures. Currently, the chief algorithmic approaches for quantum AI include:

  • Amplitude encoding: This associates quantum-state amplitudes with the inputs and outputs of computations performed by ML and DL algorithms. Amplitude encoding allows for statistical algorithms that support exponentially compact representation of complex multidimensional variables. It supports matrix inversions in which the training of statistical ML models reduces to solving linear systems of equations, such as those in least-squares linear regression, the least-squares version of support vector machines, and Gaussian processes. It often requires the developer to initialize a quantum system in a state whose amplitudes reflect the features of the entire data set (a minimal sketch of that encoding step follows this list).
  • Amplitude amplification: This uses an algorithm that finds, with high probability, the unique input to a black-box function that produces a particular output value. Amplitude amplification suits ML algorithms that can be recast as unstructured search tasks, such as k-medians and k-nearest neighbors. It can be accelerated through quantum random walk algorithms, in which randomness comes from stochastic transitions between states, such as the stochasticity inherent in quantum superposition and the collapse of wave functions upon measurement.
  • Quantum annealing: This determines the minima and maxima of a machine-learning objective function over a given set of candidate solutions. It starts from a superposition of all possible, equally weighted states of a quantum ML system. It then applies a linear partial differential equation (the time-dependent Schrödinger equation) to guide the time evolution of the quantum-mechanical system. The system eventually settles into the ground state of its Hamiltonian, the operator corresponding to the sum of its kinetic and potential energies, and that ground state encodes the solution.
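
To make the first of these concrete, here is a minimal NumPy sketch of the bookkeeping behind amplitude encoding: a classical feature vector is padded to a power-of-two length and normalized so that it can serve as the amplitude vector of an n-qubit state. The feature values are hypothetical, and the actual state preparation on quantum hardware is left to a library such as Qiskit or PennyLane.

```python
import numpy as np

# Hypothetical classical feature vector; any real-valued vector works.
features = np.array([0.2, 1.5, -0.7, 3.1, 0.0, 2.2])

# n qubits can hold 2**n amplitudes, so 6 features need 3 qubits.
n_qubits = int(np.ceil(np.log2(len(features))))

# Pad to a power-of-two length and normalize to unit norm,
# as required of a valid quantum state (sum of |amplitude|^2 = 1).
padded = np.zeros(2 ** n_qubits)
padded[: len(features)] = features
amplitudes = padded / np.linalg.norm(padded)

print(n_qubits)                  # 3
print(np.sum(amplitudes ** 2))   # 1.0
```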

Leveraging these techniques, some current AI implementations use quantum platforms as coprocessors on select calculation workloads, such as autoencoders, GANs (generative adversarial networks), and reinforcement learning agents.

As quantum AI matures, we should expect that these and other algorithmic approaches will show a clear advantage when applied to AI grand challenges that involve complex probabilistic calculations operating over highly multidimensional problem domains and multimodal data sets. Examples of heretofore intractable AI challenges that may yield to quantum-enhanced approaches include neuromorphic cognitive models, reasoning under uncertainty, representation of complex systems, collaborative problem solving, adaptive machine learning, and training parallelization.

But even as quantum libraries, platforms, and tools prove themselves out for these specific challenges, they will still rely on classical AI algorithms and functions within end-to-end machine learning pipelines.

Lack of a widely adopted open source modeling and training framework

For quantum AI to mature into a robust enterprise technology, there will need to be a dominant framework for developing, training, and deploying these applications. Google’s TensorFlow Quantum is an odds-on favorite in that regard. Announced in March 2020, TensorFlow Quantum is a software-only stack that extends the widely adopted TensorFlow open source AI library and modeling framework.

TensorFlow Quantum brings support for a wide range of quantum computing platforms into one of the dominant modeling frameworks used by today’s AI professionals. Developed by Google’s X R&D unit, it enables data scientists to use Python code to develop quantum ML and DL models through standard Keras functions. It also provides a library of quantum circuit simulators and quantum computing primitives that are compatible with existing TensorFlow APIs.

Developers can use TensorFlow Quantum for supervised learning on such AI use cases as quantum classification, quantum control, and quantum approximate optimization. They can execute advanced quantum learning tasks such as meta-learning, Hamiltonian learning, and sampling thermal states. They can use the framework to train hybrid quantum/classical models to handle both the discriminative and generative workloads at the heart of the GANs used in deep fakes, 3D printing, and other advanced AI applications.

Recognizing that quantum computing is not yet mature enough to process the full range of AI workloads with sufficient accuracy, Google designed the framework to support the many AI use cases that keep one foot in traditional computing architectures. TensorFlow Quantum enables developers to rapidly prototype ML and DL models that hybridize the execution of quantum and classical processors in parallel on learning tasks. Using the tool, developers can build both classical and quantum data sets, with the classical data processed natively by TensorFlow and the quantum extensions processing quantum data, which consists of both quantum circuits and quantum operators.
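
As a rough illustration of that hybrid pattern, the sketch below follows TensorFlow Quantum’s published Keras integration: quantum data circuits are serialized as tensors, and a parameterized quantum circuit layer (tfq.layers.PQC) sits inside an ordinary Keras model trained with a classical optimizer. The one-qubit circuit and target values are illustrative assumptions, not an example taken from Google’s documentation.

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# A one-qubit model circuit with a single trainable rotation angle.
qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))
readout = cirq.Z(qubit)

# Quantum data: two state-preparation circuits serialized as tensors.
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(cirq.I(qubit)),   # prepare |0>
    cirq.Circuit(cirq.X(qubit)),   # prepare |1>
])
labels = tf.constant([[1.0], [-1.0]])  # target <Z> expectation values

# A classical Keras model wrapping the parameterized quantum circuit.
inputs = tf.keras.Input(shape=(), dtype=tf.string)
outputs = tfq.layers.PQC(model_circuit, readout)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss="mse")
model.fit(quantum_data, labels, epochs=25, verbose=0)
```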

Google designed TensorFlow Quantum to support advanced research into alternative quantum computing architectures and algorithms for processing ML models. This makes the new offering suitable for computer scientists who are experimenting with different quantum and hybrid processing architectures optimized for ML workloads.

To this end, TensorFlow Quantum incorporates Cirq, an open source Python library for programming quantum computers. Cirq supports programmatic creation, editing, and invocation of the quantum gates that constitute the noisy intermediate-scale quantum (NISQ) circuits characteristic of today’s quantum systems. It enables developer-specified quantum computations to be executed in simulation or on real hardware; within TensorFlow Quantum, those computations are converted to tensors for use inside TensorFlow computational graphs. As an integral component of TensorFlow Quantum, Cirq enables quantum circuit simulation and batched circuit execution, as well as automated estimation of expectation values and quantum gradients. It also enables developers to build efficient compilers, schedulers, and other algorithms for NISQ machines.
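
For a feel of what Cirq programming looks like on its own, here is a minimal, self-contained example that builds a small entangling circuit and runs it on Cirq’s built-in simulator; it is a generic illustration rather than anything specific to TensorFlow Quantum.

```python
import cirq

# Two qubits on a line.
q0, q1 = cirq.LineQubit.range(2)

# A small NISQ-style circuit: superposition, entanglement, then measurement.
circuit = cirq.Circuit(
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key="m"),
)

# Run on the built-in simulator; counts concentrate on 00 and 11.
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="m"))
```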

In addition to providing a full AI software stack into which quantum processing can now be hybridized, Google is looking to expand the range of more traditional chip architectures on which TensorFlow Quantum can simulate quantum ML. Google also announced plans to expand the range of custom quantum-simulation hardware platforms supported by the tool to include graphics processing units from various vendors as well as its own Tensor Processing Unit AI-accelerator hardware platforms.

Google’s latest announcement lands in a fast-moving but still immature quantum computing marketplace. By extending the most popular open source AI development framework, Google will almost certainly catalyze use of TensorFlow Quantum in a wide range of AI-related initiatives.

However, TensorFlow Quantum comes into a market that already has several open source quantum-AI development and training tools. Unlike Google’s offering, these rival quantum AI tools come as parts of larger packages of development environments, cloud services, and consulting for standing up full working applications. Here are three full-stack quantum AI offerings:

  • Azure Quantum, announced in November 2019, is a quantum computing cloud service. Currently in private preview and due for general availability later this year, Azure Quantum comes with Microsoft’s open source Quantum Development Kit, which supports the Microsoft-developed Q# quantum programming language as well as Python, C#, and other languages. The kit includes libraries for developing quantum applications in ML, cryptography, optimization, and other domains.
  • Amazon Braket, announced in December 2019 and still in preview, is a fully managed AWS service. It provides a single development environment to build quantum algorithms, including ML, and test them on simulated hybrid quantum/classical computers. It enables developers to run ML and other quantum programs on a range of different hardware architectures. Developers craft quantum algorithms using the Amazon Braket developer toolkit and use familiar tools such as Jupyter notebooks.
  • IBM Quantum Experience is a free, publicly available, cloud-based environment for team exploration of quantum applications. It provides developers with access to advanced quantum computers for learning, developing, training, and running AI and other quantum programs. It includes IBM Qiskit, an open source developer tool with a library of cross-domain quantum algorithms for experimenting with AI, simulation, optimization, and finance applications for quantum computers.
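
As a taste of the developer experience behind the last item, here is a minimal Qiskit sketch that prepares a Bell state and inspects its statevector. Exact import paths vary somewhat across Qiskit releases, so treat this as an approximation rather than vendor-supplied sample code.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Prepare a two-qubit Bell state: Hadamard then CNOT.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

# Simulate the statevector classically and inspect the outcome probabilities.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # roughly {'00': 0.5, '11': 0.5}
```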

TensorFlow Quantum’s adoption depends on the extent to which these and other quantum AI full-stack vendors incorporate it into their solution portfolios. That seems likely, given the extent to which all these cloud vendors already support TensorFlow in their respective AI stacks.

TensorFlow Quantum won’t necessarily have the quantum AI SDK field all to itself going forward. Other open source AI frameworks—most notably, the Facebook-developed PyTorch—are contending with TensorFlow for the hearts and minds of working data scientists. One expects that rival framework to be extended with quantum AI libraries and tools during the coming 12 to 18 months.

We can catch a glimpse of the emerging multitool quantum AI industry by considering a pioneering vendor in this regard: Xanadu, whose PennyLane is an open source framework for developing and training AI that executes on hybrid quantum/classical platforms.

Launched in November 2018, PennyLane is a cross-platform Python library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computing platforms. PennyLane enables rapid prototyping and optimization of quantum circuits using existing AI tools, including TensorFlow, PyTorch, and NumPy. It is device-independent, enabling the same quantum circuit model to be run on different software and hardware back ends, including Strawberry Fields, IBM Q, Google Cirq, the Rigetti Forest SDK, the Microsoft QDK, and ProjectQ.
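
Here is a brief sketch of PennyLane’s device-independent style, based on its documented QNode interface: the same circuit definition can target a different back end simply by changing the device string, and gradients come from the library’s automatic differentiation. The rotation angle and circuit shown are arbitrary illustrations.

```python
import pennylane as qml
from pennylane import numpy as np

# Swap "default.qubit" for another installed back-end plugin to retarget the circuit.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

theta = np.array(0.3, requires_grad=True)
print(circuit(theta))              # expectation value of Z on the second qubit
print(qml.grad(circuit)(theta))    # gradient via automatic differentiation
```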

Lack of a substantial and skilled developer ecosystem

As killer apps and open source frameworks mature, they are sure to catalyze a robust ecosystem of skilled quantum-AI developers who are doing innovative work driving this technology into everyday applications.

Increasingly, we’re seeing the growth of a developer ecosystem for quantum AI. Each of the major quantum AI cloud vendors (Google, Microsoft, Amazon Web Services, and IBM) is investing heavily in enlarging the developer community. Vendor initiatives in this regard include the following:

  • Microsoft plans to integrate its QDK with developer tools such as Visual Studio so they can be used to build quantum programs for quantum hardware platforms from Honeywell, IonQ, QCI, and others, and also to simulate program performance on these and other platforms.
  • AWS’ offering enables scientists, researchers, and developers to begin experimenting with computers from quantum hardware providers (including D-Wave, IonQ, and Rigetti) in a single place. It allows users to explore, evaluate, and experiment with quantum computing hardware to gain in-house experience as they plan for the future.
  • IBM recently announced the expansion of Q Network, its three-year-old quantum developer ecosystem, under which more than 200,000 users are running hundreds of billions of executions on IBM’s quantum systems and simulators through IBM Quantum Experience. Participants in the network have access to Qiskit, to IBM’s quantum expertise and resources, and to cloud-based access to the IBM Quantum Computation Center. Many of the workloads being run include AI, as well as real-time simulations of quantum computing architectures.

In addition, enterprise quantum computing industry vendors such as D-Wave, Baidu, AmberFlux, CogniFrame, and Honeywell generally have consulting offerings geared at building the development ecosystem of partners and customers.

In the development of a tool- and platform-agnostic quantum AI developer ecosystem, Creative Destruction Lab is a key catalyst. Its Quantum Incubator Stream brings together entrepreneurs, investors, scientists in quantum technologies, and quantum hardware vendors to build ventures in the nascent domain of quantum computing, ML, optimization, sensing and other applications of quantum technologies. It provides quantum computing resources from D-Wave Systems (access to the latest D-Wave system and software libraries), IBM (access and hands-on technical support for the public IBM Q Experience systems and Qiskit tool), Rigetti (Rigetti Forest programming environment, with access to cloud-connected superconducting quantum processors and Quantum Virtual Machine), and Xanadu (Strawberry Fields, an open source library for photonic quantum computing, with a suite of simulators for execution on CPU/GPU, and access to Xanadu’s cloud-based quantum photonic chips).

Recommendations

The quantum AI market remains far from enterprise prime time deployment, but it has started to climb that maturity curve.

At the very least, the quantum AI industry will need to attain the milestones highlighted above to be considered fully mature: a consensus compelling app, a widely adopted open source development environment, and a broad development ecosystem. These maturity milestones have already been attained by leading AI tools that support modeling and training on purely classical computing architectures. We expect to see the market for hybrid quantum/classical AI mature to this point within the next three to five years.

The quantum AI market’s immaturity should not deter data scientists and other developers from exploring the technology today for proofs of concept, pilot projects, and even some production deployments. In this regard, we provide the following strategic recommendations.

To get ahead of the curve on quantum AI, application developers and data scientists should adopt solutions that leverage hybrid quantum/classical computing platforms. They should deploy quantum platforms as coprocessors, not as outright replacements, to handle specific AI workloads such as autoencoders, GANs, and reinforcement learning agents. In addition, they should integrate investments in quantum-enhanced AI tools with legacy AI modeling and training platforms. They should also apply quantum AI tools to neuromorphic cognitive models, adaptive machine learning, training parallelization, and other advanced projects to identify workloads on which these solutions offer a clear advantage over classical computing platforms.

To position themselves for this growing opportunity, IT solution providers should expand their professional services offerings and partnerships in order to train the next-generation development ecosystem for quantum AI. They should integrate their quantum AI development environments with the widely adopted open source AI frameworks, most notably TensorFlow (especially the new TensorFlow Quantum) and PyTorch. Also, they should build more automated ML features into their quantum AI tools to simplify and accelerate the data preparation, model development, training, and deployment of quantum AI applications. They should align their quantum AI libraries, software, and services with leading data-science pipeline management, devops, and multicloud environments in order to pave the way for future production deployments of quantum-enhanced AI applications.

Market investors should place their bets on providers of quantum-enhanced AI solutions that are building the tools for broad enterprise deployment of these capabilities over the next several years. Specifically, the priority should be on funding startups that follow Xanadu’s lead in providing framework-agnostic Python libraries for rapid prototyping of quantum AI applications that run on diverse software and hardware back ends.

Contact Information:

James Kobielus
