Semiconductors

The term "semiconductor" refers to a material whose electrical conductivity is intermediate between that of metals and that of insulators. This property allows the quantity and direction of the electrical current flowing through a device to be controlled very precisely. Better still, a semiconductor's behavior can be adjusted to respond to heat, light, or other electrical signals, making it possible to create components capable of generating, storing, and transmitting information. Semiconductors are now at the heart of all the electronics that surround us, from washing machines to smartphones.

Silicon

There are several semiconductor materials, but silicon is by far the most widely used, mainly because it is abundant in nature and has electrical and thermal properties that are suitable for many electronic applications. Other materials, such as tellurium and silicon carbide, are used for more specific applications.

CPU

More commonly known as the processor, the CPU (Central Processing Unit) is often compared to the brain of a computer or server. It interprets instructions and carries out calculations. In the field of AI, the CPU manages general tasks such as orchestration, network communication, memory management, and incoming and outgoing data flows. It is crucial for distributing large amounts of data without slowing down the entire process.

GPU

The graphics processor excels at parallel processing thanks to its many cores, which are individually less powerful than a CPU's but together capable of handling a large volume of simultaneous operations. This architecture makes it an essential ally for AI workloads and graphics rendering.
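This "split the work across many simple workers" pattern can be sketched in a few lines of Python. The worker count, chunk size, and squaring operation below are invented for illustration; real GPUs run thousands of hardware threads on far larger data:

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    """Each 'core' applies the same simple operation to its slice of the data."""
    return [x * x for x in chunk]

def parallel_square(data, workers=4):
    # Split the input into one chunk per worker (data-parallel decomposition).
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(square_chunk, chunks))
    # Gather the partial results back into one output.
    return [y for part in results for y in part]

print(parallel_square(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Each chunk is processed independently, which is exactly why uniform, repetitive AI workloads map so well onto GPU cores.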

ASIC

ASICs (Application-Specific Integrated Circuits) are chips designed for a single, specific task. In the field of AI, some ASICs, called NPUs (Neural Processing Units), are optimized for specific use cases such as speech recognition or computer vision. Less versatile but more efficient, they also consume less energy than more general-purpose chips such as GPUs or TPUs.

TPU

Short for Tensor Processing Unit, this is a processor developed by Google to accelerate the training of its AI models. Less flexible than a GPU, it is, however, extremely efficient at specific tasks such as matrix calculations.
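The matrix calculations mentioned above can be sketched in plain Python. The 2x2 matrices are invented examples; the layers of a real neural network multiply matrices with thousands of rows and columns, which is the workload a TPU is built to accelerate:

```python
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), row by column."""
    n = len(b)      # shared inner dimension
    p = len(b[0])   # columns of the result
    return [[sum(row[k] * b[k][j] for k in range(n)) for j in range(p)]
            for row in a]

a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

Every entry of the result is an independent sum of products, so the whole operation parallelizes naturally across specialized hardware.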

SoC (System on Chip)

Systems on a chip (SoCs) integrate several modules (CPU, GPU, device controllers, etc.) on a single chip, in a design space largely dominated by Arm. They are found in smartphones, tablets, and even some laptops. By packing components tightly together, they make devices thinner and lighter.

XPU

A generic term for all specialized processing units (CPUs, GPUs, TPUs, etc.). Initially used in marketing, it is now widely adopted in the AI world when referring to all the chips in a server or infrastructure.

Architecture

This is how a processor is designed to operate: how it processes information, how it communicates with memory, how many instructions it handles simultaneously, and so on. The two main architectures are x86 (used mainly by Intel and AMD) and Arm (designed by the eponymous company).

Fabless

This term refers to companies that design chips (blueprints, architecture, performance, etc.) without owning a manufacturing plant. They outsource this task to specialized foundries.

Foundries

Foundries, such as TSMC, manufacture chips based on the designs provided by fabless companies. This highly delicate stage takes place in a fab, a plant whose clean rooms house all the machines essential for production.

OEM

An OEM (Original Equipment Manufacturer) is a company that manufactures equipment for other brands. In AI, OEMs assemble servers from Nvidia chips so that companies can train and use their models.

Wafer

This is a thin, round slice of silicon onto which thousands of electronic circuits are etched. It is the starting point for all AI chips. Once processed, the wafer is cut into individual units (or dies) that will become GPUs, CPUs, or ASICs.

Etching processes

These are the techniques (photolithography, including extreme ultraviolet or EUV lithography) used to pattern transistors onto the wafer. Etching is done at the nanometer scale, and its fineness allows for increased power while improving energy efficiency.

Node

This refers to the fineness of a transistor's etching, expressed in nanometers (3 nm, for example). The smaller the node, the more powerful, compact, and energy-efficient the chips are.

Transistors

A transistor is a tiny electronic component that controls the flow of current in a circuit, like a switch or amplifier. Found in our chips in their billions, they are essential to the functioning of processors, the engines of AI.

Moore's Law

In 1965, Gordon Moore, co-founder of Intel, predicted that the number of transistors on a chip would double every year. He adjusted his prediction in 1975 to every two years. This law, which has become a symbol of innovation, is now slowing down, with the doubling now taking place every three years.
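Moore's law is simple exponential arithmetic, as a short sketch shows. The starting figures below (2,000 transistors in 1971) are illustrative, not those of a real product:

```python
def transistors(start_count, start_year, year, period=2):
    """Project a transistor count assuming a doubling every `period` years."""
    doublings = (year - start_year) // period
    return start_count * 2 ** doublings

# Doubling every two years: five doublings over a decade.
print(transistors(2_000, 1971, 1981))            # 64000
# The slower three-year cadence yields only three doublings.
print(transistors(2_000, 1971, 1981, period=3))  # 16000
```

Stretching the doubling period from two years to three, as the text describes, cuts a decade's growth from 32x to 8x.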

LLM

A large language model (LLM) is an artificial intelligence program capable of understanding and generating text. It is trained on a huge dataset and improves as that training is extended and refined.

Training

This is the phase where the AI is "fed" a massive dataset. This process is essential for enabling it to learn to solve problems without explicit instructions.

Inference

Inference refers to an AI model's ability to draw conclusions from what it has learned. Example: an AI trained to recognize cars will then be able to identify the make and model of an unknown vehicle in new data.
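The split between training and inference can be illustrated with a toy 1-nearest-neighbour "model": training memorizes labelled examples, and inference classifies a new, unseen point from them. The features (weight in tonnes, length in metres) and labels are invented for the sketch:

```python
def train(examples):
    """'Training': here, simply store the labelled examples."""
    return list(examples)

def infer(model, point):
    """'Inference': label a new point by its closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], point))[1]

# Hypothetical data: (weight, length) -> vehicle type.
model = train([((1.2, 4.0), "car"), ((12.0, 10.0), "truck")])
# A vehicle the model has never seen is classified from what it learned.
print(infer(model, (1.5, 4.3)))  # car
```

Note the asymmetry the glossary describes: training processes the whole dataset, while inference handles one new input at a time.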

Fine-tuning

After general training, AI can be refined with more targeted data to specialize it. This results in highly accurate models for tasks such as machine translation or detecting a specific organ in a medical image.
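Fine-tuning can be sketched as taking a few small optimization steps on targeted data, starting from an already-trained parameter rather than from scratch. The one-weight linear model, data, and learning rate below are all invented for illustration:

```python
def fine_tune(w, data, lr=0.01, steps=100):
    """Adjust the weight w of the toy model y = w * x on (x, y) pairs."""
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrained_w = 1.0                           # from hypothetical "general training"
specialist_data = [(1.0, 3.0), (2.0, 6.0)]   # the target task wants w close to 3
w = fine_tune(pretrained_w, specialist_data)
print(round(w, 2))  # approaches 3.0
```

The key idea is the starting point: because the model begins near a useful solution, a small amount of targeted data and a few steps are enough to specialize it.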