Early Temperature Measurement Attempts
Before the invention of the thermometer, humans had only crude ways to describe temperature. Ancient civilizations spoke of degrees of heat in vague terms—slightly warm, hot, burning hot—but had no way to make precise comparisons or measurements. This limitation had serious practical consequences. Metalworkers couldn't reliably reproduce the exact temperatures needed for consistent alloys. Brewers couldn't control fermentation temperatures precisely. Doctors had no way to track fever progression objectively.
The breakthrough came in the early 17th century with the invention of the thermoscope, a precursor to the modern thermometer. Galileo Galilei is often credited with creating the first thermoscope around 1593, though the exact inventor remains uncertain. Galileo's device was elegantly simple: an air-filled glass bulb joined to a narrow tube whose open lower end stood in a vessel of water. As the air in the bulb expanded or contracted with temperature changes, it pushed the water column in the tube down or allowed it to rise, providing a visible indication of temperature change.
These early instruments faced fundamental problems that would take decades to solve. They were influenced by atmospheric pressure as well as temperature, making them unreliable. They had no standardized scale, so readings from different instruments couldn't be compared. Most critically, they provided only relative measurements—you could tell that today was warmer than yesterday, but you couldn't assign meaningful numbers to the temperatures.
The Grand Duke of Tuscany, Ferdinand II de' Medici, sponsored crucial improvements in the 1650s. The craftsmen working under his patronage produced the first sealed thermometers, eliminating atmospheric pressure effects by enclosing a liquid (usually alcohol) in a completely closed glass tube. These "Florentine thermometers" represented a major advance, but they still lacked standardized scales.
The scale problem proved particularly vexing because temperature, unlike length or weight, has no natural zero point that everyone could agree upon. Different inventors tried various approaches. Some used the temperature of melting snow as zero, others used human body temperature, and still others chose the temperature of wine cellars or deep caves, which seemed relatively constant throughout the year.
Early attempts at standardization reflected the practical needs and available materials of their time. The Accademia del Cimento in Florence used a scale based on the temperatures of melting snow and the human body, dividing the interval between them into equal degrees. Robert Hooke in London proposed using the freezing point of water as zero. Each approach had merit, but without international communication and coordination, different regions developed incompatible temperature scales.
The instruments themselves remained primitive by modern standards. Glass bulbs were blown by hand, making them irregular in volume. Alcohol was the preferred liquid because it didn't freeze at normally encountered temperatures, but its expansion wasn't perfectly uniform, so equal divisions marked on the tube did not correspond to equal changes in temperature. Mercury, which would later become the standard, wasn't widely used initially because it was expensive and its toxic properties weren't well understood.
By the early 1700s, dozens of different temperature scales were in use across Europe, each with different zero points and different-sized degrees. A temperature reading from London couldn't be meaningfully compared with one from Paris or Rome. This chaos created obvious problems for scientists trying to share experimental results, merchants shipping temperature-sensitive goods, and anyone trying to understand weather patterns across different regions.
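In modern terms, the incompatibility is easy to describe: two scales that each divide the interval between a pair of fixed reference points into equal degrees are related by a simple linear conversion, but performing that conversion requires knowing how both scales read at the same physical reference states, and that shared agreement was exactly what was missing. The short Python sketch below illustrates the idea with two invented scales; the "London" and "Florence" numbers are hypothetical, not historical readings.

```python
# A minimal sketch of converting between two linear temperature scales.
# The fixed-point readings used here are invented for illustration; they are
# not the historical London or Florentine values.

def make_converter(scale_a_points, scale_b_points):
    """Return a function mapping readings on scale A to scale B, given the
    readings each scale assigns to the same two physical reference states
    (for example, melting snow and human body heat)."""
    a_low, a_high = scale_a_points
    b_low, b_high = scale_b_points
    slope = (b_high - b_low) / (a_high - a_low)
    return lambda reading: b_low + slope * (reading - a_low)

# Hypothetical "London" scale: melting snow = 0, body heat = 12 degrees.
# Hypothetical "Florence" scale: melting snow = 20, body heat = 80 degrees.
london_to_florence = make_converter((0, 12), (20, 80))

print(london_to_florence(6))  # 50.0: halfway between the fixed points on both scales
```

Without shared fixed points, no such conversion could be set up, which is why agreeing on reference temperatures mattered as much as improving the instruments themselves.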