What are Classical Computers?

Have you ever wondered how computers came into existence? What exactly are classical computers, and how have they evolved over time? As technology advances, new frontiers are emerging that challenge the very foundations of these traditional machines. So what new challenges do classical computers face?

1. History of development

Looking back at the history of computers, it’s essential to understand how they developed. Following the introduction of Arabic numerals, there was a growing need to compute ever-larger numbers for fields like navigation, construction, and military applications. The work was labor-intensive and prone to clerical errors, and this demand spurred many scientists, especially mathematicians, to create various computing machines. The journey began in 1642 with the invention of the first mechanical calculator in France. Later, in 1822, British scientist Charles Babbage (1791–1871), known as the “Father of Computers”, designed the Difference Engine, an early machine capable of performing mathematical calculations quickly. Because the Difference Engine was large, heavy, and driven by a hand crank, Babbage went on to design a more advanced machine, the Analytical Engine. It is considered an early form of the computer because its design included components similar to those of modern computers:

  • A mill, functioning like the CPU for calculations
  • A store, akin to memory
  • A reader, allowing data entry via punched cards
  • A printer, serving as the output device

Many of these characteristics later appeared in the computers developed by German civil engineer Konrad Zuse (1910–1995), the Z1 through Z4 generations. Notably, the Z1, the first programmable computer, used the binary number system, which makes it resemble modern computers. By 1939 the Z1 design had been improved and divided into separate functional units, such as input and output units, a memory unit, and an arithmetic unit.

Since there is still controversy over which computer was the first to use binary numbers, I’ll set that question aside and focus on the limitations of classical computers that led to the development of quantum computing.

2. What are the challenges for classical computers?

Most modern computers use transistors, because vacuum tubes proved limiting, as the tube-based ENIAC demonstrated. Today, classical computers use electrical wires and circuits to process information. Each wire carries a signal that is either ON or OFF, known as a bit: the smallest unit of information a computer can store. The two states correspond to whether electrons flow through the wire or not, and that flow is controlled by switches called transistors. A transistor acts as a gate that either blocks or allows the flow of electrons; in other words, it can be in only one state at a time. Transistors are combined to build logic gates, which perform computations such as adding two numbers, as the sketch below shows. Representing more complex information requires additional wires, and increasing computational power typically means adding more transistors, which presents several challenges.
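To make the gate idea concrete, here is a minimal Python sketch of a half adder, the simplest circuit that adds two bits. The AND, XOR, and half_adder functions are illustrative stand-ins for real transistor circuits, not a hardware model:

    # A half adder built from two logic gates: XOR produces the sum bit,
    # AND produces the carry bit. Purely illustrative, not transistor physics.

    def AND(a: int, b: int) -> int:
        return a & b  # 1 only when both inputs are 1

    def XOR(a: int, b: int) -> int:
        return a ^ b  # 1 when exactly one input is 1

    def half_adder(a: int, b: int) -> tuple[int, int]:
        return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

    for a in (0, 1):
        for b in (0, 1):
            s, carry = half_adder(a, b)
            print(f"{a} + {b} -> carry={carry}, sum={s}")

Chaining such adders bit by bit is how wider numbers are added, which is why representing larger values requires more wires and more transistors.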

For example, Intel has reduced its transistor process size from 22 nm to 14 nm to pack more transistors into a given area. However, such miniaturization introduces several issues:

  • As transistors shrink toward the size of just a few atoms, electrons may leak across a closed switch through a process called quantum tunneling; a rough numerical sketch follows this list. This could undermine the functionality of traditional transistors and poses a physical barrier to technological progress.
  • Packing more transistors into a smaller area can lead to overheating and greater cooling difficulties. Conversely, simply making CPUs physically larger is impractical: keeping dies small while shrinking transistors is what yields higher processing power, lower power consumption, less heat, and lower manufacturing cost per unit. Moreover, adding more transistors becomes increasingly costly.
  • Additionally, since a single transistor can represent only one state at a time, high-speed data transmission can be limited by the time required for checks such as parity verification.
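To get a feel for why tunneling matters only at very small scales, here is a rough back-of-envelope sketch in Python using the standard rectangular-barrier estimate T ≈ exp(−2κL). The 1 eV barrier height and the widths below are illustrative assumptions, not actual device parameters:

    import math

    HBAR = 1.054571817e-34  # reduced Planck constant, J*s
    M_E = 9.1093837015e-31  # electron mass, kg
    EV = 1.602176634e-19    # joules per electron volt

    def tunneling_probability(width_nm: float, barrier_ev: float = 1.0) -> float:
        # WKB-style estimate for a rectangular barrier: T ~ exp(-2 * kappa * L)
        kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
        return math.exp(-2 * kappa * width_nm * 1e-9)

    for width_nm in (5.0, 2.0, 1.0):
        print(f"{width_nm} nm barrier: T ~ {tunneling_probability(width_nm):.1e}")

Shrinking the barrier from 5 nm to 1 nm raises the leak probability by many orders of magnitude, which is why atom-scale transistors stop behaving like reliable on/off switches.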

3. Summary

Moore’s Law, first articulated in 1965, observes that the number of components on a chip doubles approximately every two years at minimal additional cost, and it has held remarkably well so far. Yet the physical limits described above suggest that this pace cannot continue indefinitely, which has led to the realization that binary computing may not be the ultimate solution and has prompted exploration into alternative approaches such as ternary computing and quantum computing.
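As a quick illustration of what “doubling every two years” implies, the following sketch projects component counts from a hypothetical baseline; the one million transistors in 1990 is a round number chosen for illustration, not historical data:

    # Project transistor counts under Moore's Law: one doubling per two years.
    def projected_transistors(start_count: int, start_year: int, year: int) -> int:
        doublings = (year - start_year) / 2
        return round(start_count * 2 ** doublings)

    for year in (1990, 2000, 2010, 2020):
        print(year, f"{projected_transistors(1_000_000, 1990, year):,}")
    # 1990 -> 1,000,000; 2000 -> 32,000,000; 2010 -> ~1.0 billion; 2020 -> ~32.8 billion

Fifteen doublings over thirty years multiply the count by 2^15 ≈ 32,768, which conveys how quickly exponential growth collides with the physical limits described above.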
