Neuromorphic quantum computing is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional approaches to computation that do not follow the von Neumann architecture. Each constructs a system that represents the physical problem at hand and then exploits the physical properties of that system to seek a "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation.
A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.
A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which relies on the principles of quantum mechanics, offers the possibility of secure communication channels that are resistant to eavesdropping. Quantum key distribution protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Moreover, quantum random number generators can produce high-quality random numbers, which are essential for secure encryption.
However, quantum computing also poses challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key cryptography schemes like RSA, which rely on the difficulty of factoring large numbers. Post-quantum cryptography, which involves the development of cryptographic algorithms that are resistant to attacks by both classical and quantum computers, is an active area of research aimed at addressing this concern.
Ongoing research in quantum cryptography and post-quantum cryptography is crucial for ensuring the security of communication and data in the face of evolving quantum computing capabilities. Advances in these fields, such as the development of new quantum key distribution (QKD) protocols, the improvement of quantum random number generators (QRNGs), and the standardization of post-quantum cryptographic algorithms, will play a key role in maintaining the integrity and confidentiality of information in the quantum era.
Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping.
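The eavesdropper-detection argument can be sketched classically. The toy simulation below (an illustrative sketch, not a real QKD implementation; the function name and the intercept-resend error model are assumptions) mimics BB84-style basis choice and sifting: without an eavesdropper the sifted key is error-free, while an intercept-resend attacker corrupts roughly a quarter of the sifted bits, which the parties can detect by comparing a sample.

```python
import random

def bb84(n_bits, eavesdrop=False, seed=0):
    """Toy BB84 run: returns (sifted key length, number of errors)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            # Eve measures in a random basis; a wrong basis randomizes the bit,
            # and the photon she resends carries her basis, not Alice's.
            e_basis = rng.randint(0, 1)
            if e_basis != a_basis:
                bit = rng.randint(0, 1)
            a_basis = e_basis
        # Bob reads the bit when his basis matches the incoming one, else random.
        bob_bits.append(bit if b_basis == a_basis else rng.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's announced bases agree.
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors
```

Without an eavesdropper the sifted bits always agree; with one, the error rate in the sifted key is about 25%, which is the detectable disturbance the text describes.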
Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware, hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing.
Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.
Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and doesn't necessarily translate to speedups for practical problems.
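The quantum Fourier transform these algorithms rely on is mathematically the discrete Fourier transform applied to a state vector, so its period-finding behavior can be checked directly with linear algebra (a small illustrative sketch; the helper name is an assumption):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary matrix of the quantum Fourier transform on n_qubits qubits."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
# The QFT is unitary, hence reversible: F times its conjugate transpose is I.
assert np.allclose(F @ F.conj().T, np.eye(8))

# A state of period 2 (equal amplitude on 0, 2, 4, 6) transforms into peaks
# at multiples of N/period = 4; reading out such peaks is how Shor's
# algorithm extracts the period of modular exponentiation.
state = np.zeros(8)
state[::2] = 0.5
probs = np.abs(F @ state) ** 2
```

Here all measurement probability concentrates on indices 0 and 4, the multiples of N divided by the period.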
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely.
Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems.
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider. In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer.
About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry. Quantum simulations might be used to understand this process and increase the energy efficiency of production. It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process by the mid-2020s, although some have predicted it will take longer.
A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers. By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. Notably, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
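The number-theoretic reduction behind Shor's algorithm can be illustrated entirely classically (a sketch for intuition; the function name is an assumption). The order of a modulo N is found here by brute force, and that order-finding step is exactly what a quantum computer replaces with efficient period finding:

```python
from math import gcd

def factor_via_order(N, a):
    """Classical illustration of Shor's reduction: find the order r of a
    modulo N (brute force here; the quantum speedup lies entirely in this
    step), then derive a nontrivial factor from a**(r/2)."""
    g = gcd(a, N)
    if g > 1:
        return g                     # lucky guess: a already shares a factor
    r, x = 1, a % N
    while x != 1:                    # exponential-time order finding
        x = (x * a) % N
        r += 1
    if r % 2:
        return None                  # odd order: this choice of a fails
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    return f if 1 < f < N else None
```

For example, with N = 15 and a = 7 the order is 4, and gcd(7**2 - 1, 15) yields the factor 3.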
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search.
There exists a Boolean function that evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs, as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.
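The square-root scaling can be observed directly in a small state-vector simulation (an illustrative sketch; the function name and parameters are assumptions, not part of the source text): after roughly (pi/4) times the square root of N oracle calls, nearly all probability sits on the marked item.

```python
import numpy as np

def grover_search(n_qubits, marked, n_iter=None):
    """Simulate Grover's algorithm over 2**n_qubits items, one marked."""
    N = 2 ** n_qubits
    if n_iter is None:
        # Optimal iteration count grows as sqrt(N): the quadratic speedup.
        n_iter = int(round(np.pi / 4 * np.sqrt(N)))
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    for _ in range(n_iter):
        state[marked] *= -1                  # oracle: flip the marked amplitude
        state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return np.abs(state) ** 2                # measurement probabilities

probs = grover_search(10, marked=123)        # 1024 items, about 25 iterations
```

A classical search needs on the order of 1024 checks here; the simulation finds the marked index with near-certainty after about 25 Grover iterations.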
Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Adiabatic optimization may be helpful for solving computational biology problems.
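The adiabatic mechanism can be demonstrated numerically on a single qubit (a minimal sketch under assumed Hamiltonians: H0 = -X, whose ground state is easy to prepare, and H1 = -Z, whose ground state encodes the "answer"; the function name is an assumption). A slow interpolation keeps the system in the instantaneous ground state, while a fast one does not:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def anneal(total_time, n_steps=4000):
    """Evolve the ground state of H0 = -X under H(s) = (1-s)(-X) + s(-Z)."""
    dt = total_time / n_steps
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # ground state of -X
    for step in range(n_steps):
        s = (step + 0.5) / n_steps
        H = (1 - s) * (-X) + s * (-Z)
        # Exact propagator for this small step via the eigendecomposition of H.
        evals, evecs = np.linalg.eigh(H)
        psi = evecs @ (np.exp(-1j * evals * dt) * (evecs.conj().T @ psi))
    return abs(psi[0]) ** 2   # overlap with |0>, the ground state of -Z

slow = anneal(50.0)   # adiabatic sweep: ends near the solution state
fast = anneal(0.1)    # fast sweep: barely leaves the initial |+> state
```

The slow sweep ends with nearly all probability on the ground state of the final Hamiltonian, as the adiabatic theorem predicts, while the fast sweep does not.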
Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks.
For example, the HHL algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.
Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms.
As of 2023, classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. For many tasks there is no promise of useful quantum speedup, and for some tasks any quantum speedup is provably ruled out. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain.
There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer: a scalable physical system with well-characterized qubits; the ability to initialize the qubits to a simple fiducial state; decoherence times much longer than the gate operation time; a universal set of quantum gates; and the ability to measure individual qubits reliably.
Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.
One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems (in particular, the transverse relaxation time T2) typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds.
As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions.
These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter; an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
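This rule of thumb is simple arithmetic (the gate time and coherence time below are illustrative assumptions, not figures from the text):

```python
# Per-gate error is roughly operating time divided by decoherence time.
def gate_error(gate_time_s, decoherence_time_s):
    return gate_time_s / decoherence_time_s

# An assumed 20 ns gate against an assumed 100 microsecond coherence time
# gives an error around 2e-4, i.e. the gate must be ~5000x faster than
# the decoherence timescale to reach this error level.
err = gate_error(20e-9, 100e-6)
```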
As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^-3, assuming the noise is depolarizing.
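Below threshold, error correction suppresses logical errors exponentially in the code distance. A commonly used heuristic scaling (the formula, prefactor, threshold value, and function name here are illustrative assumptions, not figures from the text) is p_L roughly A * (p / p_th) ** ((d + 1) // 2):

```python
def logical_error_rate(p_physical, distance, p_threshold=1e-2, prefactor=0.1):
    """Heuristic surface-code-style suppression of the logical error rate."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) // 2)

# At a 1e-3 physical error rate (a tenth of the assumed threshold),
# each increase in code distance multiplies the suppression:
p_d5  = logical_error_rate(1e-3, 5)
p_d11 = logical_error_rate(1e-3, 11)
```

This is why operating below the threshold matters: above it, increasing the distance would make the logical error rate worse instead of better.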
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction. With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2, or about 10^7 steps, which at 1 MHz takes about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates show that at least 3 million physical qubits would be needed to factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. To date, this remains the lowest physical-qubit estimate for practically useful integer factorization at problem sizes of 1,024 bits or larger.
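These figures follow from simple arithmetic; the constant factors below are illustrative assumptions chosen only to match the quoted orders of magnitude:

```python
L = 1_000                            # bits in the number to be factored
qubits_no_ec = 10 * L                # between L and L**2; ~1e4 as quoted
qubits_with_ec = qubits_no_ec * L    # error correction adds a factor of L: ~1e7
steps = 10 * L ** 2                  # ~1e7 elementary steps
seconds_at_1mhz = steps / 1e6        # ~10 s at a 1 MHz gate rate
```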
Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates.
Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.
In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has subsequently been challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, substantially reducing the gap between Sycamore and classical supercomputers and even beating it.
In December 2020, a group at USTC implemented a type of boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.
Claims of quantum supremacy have generated hype around quantum computing, but they are based on contrived benchmark tasks that do not directly imply useful real-world applications.
In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. This development underscores the evolving landscape of quantum computing, highlighting both the progress and the complexities involved in validating quantum supremacy claims.
Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarised current quantum computers as being "For now, absolutely nothing". The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023 Communications of the ACM article found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm".
This state of affairs can be traced to several current and long-term considerations.
In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen for quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms and show that some quantum algorithms asymptotically improve upon those bounds.
Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons.
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has likewise expressed skepticism of quantum computing.
A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations. Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well.
Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis.
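This simulability is concrete: a few lines of linear algebra reproduce a two-qubit quantum circuit exactly, at a cost that grows exponentially with the number of qubits (a minimal sketch):

```python
import numpy as np

# Classically simulate a 2-qubit circuit by tracking the full state vector.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT, control = qubit 0

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = np.kron(H, np.eye(2)) @ state           # Hadamard on qubit 0
state = CNOT @ state                            # entangle into a Bell state
probs = np.abs(state) ** 2                      # measurement probabilities
```

The resulting Bell state measures 00 or 11 with probability 1/2 each; the catch is that an n-qubit state vector has 2^n entries, which is why such simulation offers no efficiency guarantee.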
While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP, the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP and widely suspected that BPP ⊊ BQP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP. It is suspected that NP ⊄ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems.
The concept of wetware is an application of specific interest to the field of computer manufacturing. Moore's law, which states that the number of transistors that can be placed on a silicon chip doubles roughly every two years, has acted as a goal for the industry for decades, but as the size of computers continues to decrease, the ability to meet this goal has become more difficult, threatening to reach a plateau. Due to the difficulty in reducing the size of computers because of the size limitations of transistors and integrated circuits, wetware provides an unconventional alternative. A wetware computer composed of neurons is an ideal concept because, unlike conventional materials, which operate in binary, a neuron can shift between thousands of states, constantly altering its chemical conformation and redirecting electrical pulses through over 200,000 channels in any of its many synaptic connections. Because of this large difference in the possible settings for any one neuron, compared to the binary limitations of conventional computers, the space limitations are far less restrictive.
The concept of wetware is distinct and unconventional, bearing only a loose resemblance to both the hardware and software of conventional computers. While hardware is understood as the physical architecture of traditional computational devices, built from electrical circuitry and silicon wafers, software represents the encoded architecture of storage and instructions. Wetware is a separate concept that uses the formation of organic molecules, mostly complex cellular structures, to create a computational device such as a computer. In wetware, the ideas of hardware and software are intertwined and interdependent. The molecular and chemical composition of the organic or biological structure would represent not only the physical structure of the wetware but also the software, being continually reprogrammed by the discrete shifts in electrical pulses and chemical concentration gradients as the molecules change their structures to communicate signals. The responsiveness of a cell, proteins, and molecules to changing conformations, both within their structures and around them, ties the idea of internal programming and external structure together in a way that is alien to the current model of conventional computer architecture.
The structure of wetware represents a model where the external structure and internal programming are interdependent and unified, meaning that changes to the programming or internal communication between molecules of the device would represent a physical change in the structure. The dynamic nature of wetware borrows from the function of complex cellular structures in biological organisms. The combination of "hardware" and "software" into one dynamic, interdependent system that uses organic molecules and complexes to create an unconventional model for computational devices is a specific example of applied biorobotics.
Cells can in many ways be seen as their own form of naturally occurring wetware, similar to the concept that the human brain is the preexisting model system for complex wetware. In his book Wetware: A Computer in Every Living Cell, Dennis Bray explains his theory that cells, which are the most basic form of life, are just highly complex computational structures, like computers. To simplify one of his arguments, a cell can be seen as a type of computer by virtue of its structured architecture. In this architecture, much like a traditional computer, many smaller components operate in tandem to receive input, process the information, and compute an output. In an overly simplified, non-technical analysis, cellular function can be broken into the following components: information and instructions for execution are stored as DNA in the cell; RNA acts as a source of distinctly encoded input, processed by ribosomes and other transcription factors to access and process the DNA; and a protein is the output. Bray's argument in favor of viewing cells and cellular structures as models of natural computational devices is important when considering the more applied theories of wetware in biorobotics.
Wetware and biorobotics are closely related concepts, both of which borrow from similar overall principles. A biorobotic structure can be defined as a system modeled on a preexisting organic complex or model, such as cells or more complex structures like organs or whole organisms. Unlike wetware, the concept of biorobotics is not always a system composed of organic molecules; it could instead be composed of conventional material designed and assembled in a structure similar to or derived from a biological model. Biorobotics has many applications and is used to address the challenges of conventional computer architecture. Conceptually, designing a program, robot, or computational device after a preexisting biological model such as a cell, or even a whole organism, gives the engineer or programmer the benefit of incorporating the evolutionary advantages of the model into the structure.
In 1999, William Ditto and his team of researchers at Georgia Institute of Technology and Emory University created a basic form of wetware computer capable of simple addition by harnessing leech neurons. Leeches were used as a model organism due to the large size of their neurons and the ease associated with their collection and manipulation. However, these results have never been published in a peer-reviewed journal, prompting questions about the validity of the claims. The computer was able to complete basic addition through electrical probes inserted into the neurons. The manipulation of electrical currents through neurons was not a trivial accomplishment, however. Unlike conventional computer architecture, which is based on binary on/off states, neurons are capable of existing in thousands of states and communicate with each other through synaptic connections, each of which contains over 200,000 channels. Each can be dynamically shifted, in a process called self-organization, to constantly form and reform new connections. A conventional computer program called the dynamic clamp, written by Eve Marder, a neurobiologist at Brandeis University, was capable of reading the electrical pulses from the neurons in real time and interpreting them. This program was used to manipulate the electrical signals input into the neurons to represent numbers and to communicate with each other to return the sum. While this computer is a very basic example of a wetware structure, it represents a small example with fewer neurons than are found in a more complex organ. Ditto believes that by increasing the number of neurons present, the chaotic signals sent between them will self-organize into a more structured pattern, such as the regulation of heart neurons into the constant heartbeat found in humans and other living organisms.
After his work creating a basic computer from leech neurons, Ditto continued to work not only with organic molecules and wetware but also on the concept of applying the chaotic nature of biological systems and organic molecules to conventional material and logic gates. Chaotic systems have advantages for generating patterns and computing higher-order functions like memory, arithmetic logic, and input/output operations. In his article Construction of a Chaotic Computer Chip, Ditto discusses the advantages in programming of using chaotic systems, with their greater sensitivity to respond and reconfigure logic gates, in his conceptual chaotic chip. The main difference between a chaotic computer chip and a conventional computer chip is the reconfigurability of the chaotic system. Unlike a traditional computer chip, where a programmable gate array element must be reconfigured through the switching of many single-purpose logic gates, a chaotic chip can reconfigure all logic gates through control of the pattern generated by the non-linear chaotic element.
Cognitive biology evaluates cognition as a basic biological function. W. Tecumseh Fitch, a professor of cognitive biology at the University of Vienna, is a leading theorist on ideas of cellular intentionality. The idea is that not only do whole organisms have a sense of "aboutness" or intentionality, but that single cells also carry a sense of intentionality through their ability to adapt and reorganize in response to certain stimuli. Fitch discusses the idea of nano-intentionality, specifically in regard to neurons and their ability to rearrange themselves to create neural networks. He considers the ability of cells such as neurons to respond independently to stimuli such as damage to be a form of "intrinsic intentionality" in cells, explaining that "[w]hile at a vastly simpler level than intentionality at the human cognitive level, I propose that this basic capacity of living things provides the necessary building blocks for cognition and higher-order intentionality." Fitch describes the value of his research to specific areas of computer science such as artificial intelligence and computer architecture, stating, "If a researcher aims to make a conscious machine, doing it with rigid switches is barking up the wrong tree." Fitch believes that an important aspect of the development of areas such as artificial intelligence is wetware with nano-intentionality and an autonomous ability to adapt and restructure itself.
In a review of the above-mentioned research conducted by Fitch, Daniel Dennett, a professor at Tufts University, discusses the importance of the distinction between the concept of hardware and software when evaluating the idea of wetware and organic material such as neurons. Dennett discusses the value of observing the human brain as a preexisting example of wetware. He sees the brain as having "the competence of a silicon computer to take on an unlimited variety of temporary cognitive roles." Dennett disagrees with Fitch on certain areas, such as the relationship of software/hardware versus wetware, and what a machine with wetware might be capable of. Dennett highlights the importance of additional research into human cognition to better understand the intrinsic mechanism by which the human brain can operate, to better create an organic computer.
Brain-on-a-chip devices have been developed that are "aimed at testing and predicting the effects of biological and chemical agents, disease or pharmaceutical drugs on the brain over time". Wetware computers may be useful for research on brain diseases and brain health and capacities, for drug discovery, for testing genome edits, and for research on brain aging.
Wetware computers may have substantial ethical implications, for instance relating to their possible potential for sentience and suffering, and to their status as a dual-use technology.
Moreover, in some cases the human brain itself may be connected as a kind of "wetware" to other information technology systems, which may also have large social and ethical implications, including issues related to intimate access to people's brains. For example, in 2021 Chile became the first country to approve a neurolaw that establishes rights to personal identity, free will and mental privacy.
The concept of artificial insects may raise substantial ethical questions, including questions related to the decline in insect populations.
It is an open question whether human cerebral organoids could develop a degree or form of consciousness. Whether or how they could acquire moral status, with related rights and limits, may also be a future question. There is research on how consciousness could be detected. As cerebral organoids may acquire human brain-like neural function, subjective experience and consciousness may be feasible. Moreover, they may acquire such capacities upon transplantation into animals. A study notes that it may, in various cases, be morally permissible "to create self-conscious animals by engrafting human cerebral organoids, but in the case, the moral status of such animals should be carefully considered".
While there have been few major developments in the creation of an organic computer since the neuron-based calculator developed by Ditto in the 1990s, research continues to push the field forward; in 2023, researchers at the University of Illinois Urbana-Champaign constructed a functioning computer using 80,000 mouse neurons as a processor that can detect light and electrical signals. Projects such as Ditto's modeling of chaotic pathways in silicon chips have yielded discoveries in organizing traditional silicon chips and structuring computer architecture to be more efficient and better structured. Ideas emerging from the field of cognitive biology also continue to drive discoveries in structuring systems for artificial intelligence to better imitate pre-existing systems in humans.
In a proposed fungal computer using basidiomycetes, information is represented by spikes of electrical activity, a computation is implemented in a mycelium network, and an interface is realized via fruit bodies.
Connecting cerebral organoids with other nerve tissues may become feasible in the future, as may the connection of physical artificial neurons and the control of muscle tissue. External modules of biological tissue could trigger parallel trains of stimulation back into the brain. All-organic devices could be advantageous because they could be biocompatible, which may allow them to be implanted into the human body. This may enable treatments of certain diseases and injuries to the nervous system.
Three companies are focusing specifically on wetware computing using living neurons:
Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible.
Since then the field has expanded into several avenues. In 1995, the idea for DNA-based memory was proposed by Eric Baum, who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology, although in vitro demonstrations came almost a decade later.
The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field started by Ned Seeman about a decade before Len Adleman's demonstration. Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly, which as of 2020 is extremely sophisticated. Self-assembled structures ranging from a few nanometers up to several tens of micrometers in size had been demonstrated by 2018.
In 1994, Prof. Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While the demonstration by Adleman showed the possibility of DNA-based computers, the DNA design was trivial: as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation grows exponentially. Therefore, computer scientists and biochemists started exploring tile assembly, where the goal was to use a small set of DNA strands as tiles to perform arbitrary computations upon growth. Other avenues theoretically explored in the late 1990s include DNA-based security and cryptography, the computational capacity of DNA systems, DNA memories and disks, and DNA-based robotics.
Before 2002, Lila Kari showed that the DNA operations performed by genetic recombination in some organisms are Turing complete.
In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track similar to a line follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated.
In 1994 Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. He managed to solve an instance of the directed Hamiltonian path problem. In Adleman's experiment, the Hamiltonian path problem was implemented notationally as the "travelling salesman problem". For this purpose, different DNA fragments were created, each one of them representing a city that had to be visited. Every one of these fragments is capable of linking with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments formed bigger ones, representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated. The remaining fragments were the solution to the problem, although the experiment as a whole lasted a week, and the technical limitations of the time prevented straightforward evaluation of the results. The experiment is therefore not suited to practical application, but it is nevertheless a proof of concept.
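Adleman's generate-and-filter strategy can be sketched in ordinary software: enumerate every candidate ordering of cities (the role played by massively parallel random ligation in the test tube), then discard orderings that violate the edge constraints (the role played by the chemical filtering steps). The five-node graph below is an illustrative stand-in, not Adleman's actual seven-city instance.

```python
from itertools import permutations

# Directed graph as adjacency sets; a small illustrative instance.
edges = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {4}, 4: set()}
start, end = 0, 4
n = len(edges)

# "Generate": every ordering of the cities, mimicking random ligation
# of city strands in the test tube.
candidates = permutations(range(n))

# "Filter": keep only orderings that start and end correctly and whose
# consecutive cities are joined by an edge, mimicking the chemical
# elimination of invalid routes.
def is_hamiltonian_path(path):
    return (path[0] == start and path[-1] == end and
            all(b in edges[a] for a, b in zip(path, path[1:])))

solutions = [p for p in candidates if is_hamiltonian_path(p)]
print(solutions)  # the surviving "strands"
```

The exponential blow-up mentioned later in this article is visible here: the candidate pool has n! orderings, which the test tube explores in parallel but which a growing instance quickly makes infeasible in volume of DNA.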
The first results on such problems were obtained by Leonard Adleman.
In 2002, J. Macdonald, D. Stefanović and M. Stojanović created a DNA computer able to play tic-tac-toe against a human player. The calculator consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand onto which a fluorescent chemical group was grafted at one end and a repressor group at the other. Fluorescence is only active if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, such a DNA enzyme will unfold if two specific types of DNA strand are introduced, reproducing the AND logic function.
By default, the computer is considered to have played first in the central square. The human player starts with eight different types of DNA strands corresponding to the eight remaining boxes that may be played. To play box number i, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, resulting, in one of these bins, in the deformation of the DNA enzymes, which bind to the substrate and cut it. The corresponding bin becomes fluorescent, indicating which box is being played by the DNA computer. The DNA enzymes are divided among the bins in such a way as to ensure that the best the human player can achieve is a draw, as in real tic-tac-toe.
Kevin Cherry and Lulu Qian at Caltech developed a DNA-based artificial neural network that can recognize 100-bit hand-written digits. They achieved this by computing an appropriate set of weights in advance on a conventional computer; the weights were represented by varying concentrations of weight molecules, which were later added to the test tube holding the input DNA strands.
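The winner-take-all idea behind such a network can be sketched in ordinary software: each class is a weight vector (realised chemically as concentrations of weight molecules), and the class whose weights overlap the input bits most strongly wins. The 4-bit patterns below are toy stand-ins for the 100-bit digit images, not the actual patterns used in the experiment.

```python
# Toy weight vectors for two made-up classes; in the DNA network these
# would be molecule concentrations prepared in advance.
weights = {
    "L": [1, 0, 1, 1],
    "T": [1, 1, 0, 0],
}

def classify(bits):
    # Score each class by the overlap (dot product) between its
    # weights and the input bit pattern.
    scores = {label: sum(w * b for w, b in zip(wv, bits))
              for label, wv in weights.items()}
    # Winner-take-all: the highest-scoring class suppresses the rest.
    return max(scores, key=scores.get)

print(classify([1, 0, 1, 0]))  # overlaps "L" more strongly
```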
One of the challenges of DNA computing is its speed. While DNA as a substrate is biologically compatible, i.e. it can be used in places where silicon technology cannot, its computation speed is still very slow. For example, the square-root circuit used as a benchmark in the field took over 100 hours to complete. While newer approaches with external enzyme sources are reporting faster and more compact circuits, Chatterjee et al. demonstrated an interesting way to speed up computation through localized DNA circuits, a concept being further explored by other groups. This idea, originally proposed in the field of computer architecture, has been adopted in this field as well. In computer architecture, it is well known that if instructions are executed in sequence, having them loaded in the cache leads to fast performance, a phenomenon known as the principle of locality. This is because with instructions in fast cache memory, there is no need to swap them in and out of main memory, which can be slow. Similarly, in localized DNA computing, the DNA strands responsible for computation are fixed on a breadboard-like substrate, ensuring physical proximity of the computing gates. Such localized DNA computing techniques have been shown to potentially reduce computation time by orders of magnitude.
Subsequent research on DNA computing has produced reversible DNA computing, bringing the technology one step closer to the silicon-based computing used in PCs. In particular, John Reif and his group at Duke University have proposed two different techniques to reuse the computing DNA complexes. The first design uses dsDNA gates, while the second design uses DNA hairpin complexes. While both designs face some issues, they appear to represent a significant breakthrough in the field of DNA computing. Some other groups have also attempted to address the gate reusability problem.
Using strand displacement reactions, the paper "Synthesis Strategy of Reversible Circuits on DNA Computers" presents proposals for implementing reversible gates and circuits on DNA computers by combining DNA computing and reversible computing techniques. The paper also proposes a universal reversible gate library for synthesizing n-bit reversible circuits on DNA computers, with the average length and cost of the constructed circuits better than those of previous methods.
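Reversibility here is a logic-level property that such DNA designs aim to preserve: a reversible gate is a bijection on its input states, so no information is destroyed. The Toffoli (controlled-controlled-NOT) gate, a standard member of universal reversible gate libraries, illustrates this and can be checked in a few lines:

```python
# Toffoli gate: flip the target bit c only when both controls a, b are 1.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# Bijectivity: every 3-bit output occurs exactly once.
assert sorted(toffoli(*s) for s in states) == sorted(states)

# Self-inverse: applying the gate twice recovers the input.
assert all(toffoli(*toffoli(*s)) == s for s in states)

print("Toffoli gate is reversible")
```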
There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange.
The most fundamental operation in DNA computing and molecular programming is the strand displacement mechanism. Currently, there are two ways to perform strand displacement:
Besides simple strand displacement schemes, DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA.
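A minimal concentration-level sketch of this style of logic, assuming an idealised gate complex that releases its output strand only after the input strands have displaced its protecting strands (real designs, such as seesaw gates, work at the sequence level and this model abstracts all of that away):

```python
# Idealised strand-displacement logic at the concentration level.
# Concentrations are normalised to [0, 1]; "threshold" models the
# minimum signal needed to drive displacement forward.

def and_gate(in1, in2, threshold=0.5):
    # Output release is limited by the scarcer input strand, since
    # both must bind before the output strand comes free.
    released = min(in1, in2)
    return released if released >= threshold else 0.0

def or_gate(in1, in2, threshold=0.5):
    # Either input alone can displace the output strand.
    released = max(in1, in2)
    return released if released >= threshold else 0.0

print(and_gate(1.0, 1.0))  # both inputs high: output released
print(and_gate(1.0, 0.0))  # one input missing: no output
```

Because the outputs are themselves strand concentrations in the same range as the inputs, gates of this kind can be composed into the arbitrarily large cascades the paragraph describes.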
The full stack for DNA computing looks very similar to a traditional computer architecture. At the highest level, a C-like general purpose programming language is expressed using a set of chemical reaction networks (CRNs). This intermediate representation gets translated to a domain-level DNA design and then implemented using a set of DNA strands. In 2010, Erik Winfree's group showed that DNA can be used as a substrate to implement arbitrary chemical reactions. This opened the way to the design and synthesis of biochemical controllers, since the expressive power of CRNs is equivalent to a Turing machine. Such controllers can potentially be used in vivo for applications such as preventing hormonal imbalance.
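The CRN intermediate layer can be illustrated with a toy reaction A + B → C, integrated here with simple Euler steps under mass-action kinetics; the rate constant and starting concentrations are arbitrary. A compiler of the kind described above would translate such a reaction into strand-displacement cascades rather than simulate it.

```python
# Mass-action simulation of the one-reaction CRN  A + B -> C.
# d[A]/dt = d[B]/dt = -k[A][B],  d[C]/dt = +k[A][B].
def simulate(a, b, c=0.0, k=1.0, dt=0.001, steps=20000):
    for _ in range(steps):
        rate = k * a * b          # mass-action reaction rate
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return a, b, c

a, b, c = simulate(a=1.0, b=0.6)
print(round(c, 2))  # nearly all of the limiting species B converts to C
```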
Catalytic DNA (DNAzymes) catalyzes a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to 1-, 2-, and 3-input gates, with no current implementation for evaluating statements in series.
The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit. The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then "used", and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added.
Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate in any arbitrary location. Stojanovic and MacDonald have used the E6 DNAzymes to build the MAYA I and MAYA II machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme. While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn2+ or Mn2+, and thus are not useful in vivo.
A design called a stem loop, consisting of a single strand of DNA with a loop at one end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II, which can play tic-tac-toe to some extent.
Enzyme-based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA.
Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under-expression of the genes PPAP2B and GSTP1 and over-expression of PIM1 and HPN. Their automata evaluated the expression of each gene, one gene at a time, and on positive diagnosis released a single-stranded DNA molecule that is an antisense for MDM2. MDM2 is a repressor of protein p53, which itself is a tumor suppressor. On negative diagnosis, it was decided to release a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present. The requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms". It should also be pointed out that the 'software' molecules can be reused in this case.
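The diagnostic logic can be sketched as a simple state machine that inspects one gene at a time, as the automaton does; the expression values and the 1.0 threshold below are illustrative, not taken from the original work.

```python
# Disease profile from the text: two genes under-expressed, two
# over-expressed in the positive (cancer) case.
PROFILE = {"PPAP2B": "under", "GSTP1": "under", "PIM1": "over", "HPN": "over"}

def diagnose(expression, threshold=1.0):
    state = "positive"            # automaton starts assuming disease
    for gene, direction in PROFILE.items():
        level = expression[gene]
        matches = level < threshold if direction == "under" else level > threshold
        if not matches:
            state = "negative"    # any mismatch flips the diagnosis
    # Positive diagnosis releases the drug; negative releases the
    # suppressor of the drug, as in the two-automaton scheme above.
    return "release drug" if state == "positive" else "release suppressor"

print(diagnose({"PPAP2B": 0.3, "GSTP1": 0.5, "PIM1": 2.1, "HPN": 1.8}))
```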
DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays.
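The XOR-encoding tile assembly corresponds to the Rule 90 cellular automaton, in which each cell is the XOR of its two neighbours in the previous row; starting from a single 1, the rows trace out the Sierpinski gasket, just as the growing DX array does.

```python
# Rule 90: each cell becomes the XOR of its left and right neighbours.
# The row is treated as a ring (wraparound indexing) for simplicity.
def rule90(row):
    return [row[i - 1] ^ row[(i + 1) % len(row)] for i in range(len(row))]

width, rows = 17, 8
row = [0] * width
row[width // 2] = 1              # single seed tile in the first row

for _ in range(rows):
    print("".join("#" if c else "." for c in row))
    row = rule90(row)
```

Each printed row plays the role of one layer of tiles whose sticky ends encode the XOR of the layer beneath, which is why the assembly as a whole computes rather than merely repeats a periodic pattern.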
DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once. For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer.
DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation. For example, if the space required for the solution of a problem grows exponentially with the size of the problem on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical.
A partnership between IBM and Caltech was established in 2009 aiming at "DNA chips" production. A Caltech group is working on the manufacturing of these nucleic-acid-based integrated circuits. One of these chips can compute integer square roots. A compiler has been written in Perl.
The slow processing speed of a DNA computer is compensated by its potential to make a high amount of multiple parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one. This is achieved by the fact that millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one.
Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid. However, optoelectronic devices consume 30% of their energy converting electronic energy into photons and back; this conversion also slows the transmission of messages. All-optical computers eliminate the need for optical-electrical-optical conversions, thus reducing electrical power consumption.
Application-specific devices, such as synthetic-aperture radar and optical correlators, have been designed to use the principles of optical computing. Correlators can be used, for example, to detect and track objects, and to classify serial time-domain optical data.
The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent optical transistor is required. This can be achieved using nonlinear crystal optics: materials exist in which the intensity of incoming light affects the intensity of the light transmitted through the material, in a manner similar to the current response of a bipolar transistor. Such an optical transistor can be used to create optical logic gates, which in turn are assembled into the higher-level components of the computer's central processing unit. These gates would be built from nonlinear optical crystals used to manipulate light beams so that they control other light beams.
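The transistor analogy can be made concrete with a toy model: treat the nonlinear material as an S-shaped intensity transfer function and combine two beams on it to obtain an AND gate. The sigmoid shape, threshold and intensities below are illustrative, not measured optical responses.

```python
import math

# Idealised nonlinear intensity response: transmission switches
# sharply once the incident intensity crosses a threshold, analogous
# to a transistor turning on.
def optical_transistor(intensity, threshold=1.5, steepness=10.0):
    return 1.0 / (1.0 + math.exp(-steepness * (intensity - threshold)))

def optical_and(i1, i2):
    # Two beams combined on the nonlinear medium: only the summed
    # intensity of two "on" beams crosses the switching threshold.
    return optical_transistor(i1 + i2) > 0.5

print(optical_and(1.0, 1.0))  # both beams on: gate output high
print(optical_and(1.0, 0.0))  # one beam off: gate output low
```

The steep transfer function is what makes cascading possible: a marginal input contrast is restored to a clean on/off level at each gate, just as voltage levels are restored in electronic logic.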
Like any computing system, an optical computing system needs three things to function well:
optical processor