The definition of 1 kelvin
That was the old definition.
Since May 2019, the kelvin has been defined by fixing the numerical value of the Boltzmann constant: https://physics.nist.gov/cgi-bin/cuu/Value?k
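For reference, the fixed value is exact by definition:

$$ k = 1.380\,649 \times 10^{-23}\ \mathrm{J\,K^{-1}}, $$

so one kelvin is the change in thermodynamic temperature that corresponds to a change of $1.380\,649 \times 10^{-23}\ \mathrm{J}$ in the thermal energy $kT$.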
This is consistent with the triple point of a specific kind of water (VSMOW, Vienna Standard Mean Ocean Water) lying at 273.16 K.
It is also historically consistent with the still older definition of the degree centigrade as 1/100 of the temperature difference between the freezing and boiling points of water.
A different scale for absolute temperature is based on the size of a degree on the Fahrenheit scale. This is the Rankine scale, where $1\ \mathrm{K} = 1.8\ ^\circ\mathrm{R}$.
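As a quick check of the conversion, the triple point of water comes out on the two absolute scales as

$$ T_{\rm tp} = 273.16\ \mathrm{K} = 273.16 \times 1.8\ ^\circ\mathrm{R} = 491.688\ ^\circ\mathrm{R}. $$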
Edit: so your book was wrong. The triple point is at $273.16\ \mathrm{K}$, which is $0.01\ ^\circ\mathrm{C}$ (the triple point is slightly higher in temperature than the melting point of ice at atmospheric pressure).
To answer this question it may help to take an example from a more familiar area of physics, and then discuss temperature.
For a long time the kilogram (the SI unit of mass) was defined as the mass of a certain object kept in a vault in Paris. The gram could then be defined as one thousandth of the mass of that object, and so on. If you now ask what units are being used to state the mass of the chosen object, the answer is that it does not matter, as long as they are proportional to the scale of units you want to adopt. So if someone were to tell you the mass of the special object in pounds (e.g. 2.2 pounds), you would still know that one gram is a thousandth of that.
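To see the unit-independence explicitly, suppose the prototype's mass really were reported as 2.2 pounds (the round figure above); then

$$ 1\ \mathrm{g} = \frac{2.2\ \mathrm{lb}}{1000} = 0.0022\ \mathrm{lb}, $$

so a gram is still one thousandth of the prototype's mass, whichever unit that mass happens to be quoted in.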
With temperature it goes similarly. There is a certain state in which water, water vapour and ice are all in mutual equilibrium: the triple point. That state has a temperature independent of other details such as volume, as long as the substances are pure and not divided up too finely. So that state has a definite temperature; it has one unit of temperature in "triple-point units" (a temperature scale that I just invented).

When we say the kelvin is a certain fraction of that temperature, we are saying that a thermometer whose indications are proportional to absolute temperature must be calibrated so as to register 273.16 when it is put into equilibrium with water at the triple point, if we wish the thermometer to read in kelvin. For example, if the thermometer is based on a constant-volume ideal gas, then one should choose the conversion factor from gas pressure to indicated temperature so that the indicated temperature is 273.16 at the triple point. You then know that your gas thermometer is giving readings in kelvin, and you never needed to know any other units.

(Note: such a thermometer is very accurate over a wide range of temperature, but it cannot be used below temperatures of a few kelvin. To reach the low-temperature region you need other types of thermometer; in principle they can all be calibrated to agree where their ranges overlap.)
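As a minimal sketch of that calibration procedure (the pressure readings below are made-up illustrative numbers; for an ideal gas at constant volume only their ratio matters):

```python
# Sketch of calibrating a constant-volume ideal-gas thermometer.
# Assumption: the gas is ideal, so at fixed volume pressure is proportional
# to absolute temperature, giving T = T_triple * (p / p_triple).

T_TRIPLE_POINT_K = 273.16  # defined temperature of the triple point of water

def kelvin_from_pressure(p, p_triple):
    """Indicated temperature in kelvin, from the measured pressure p and
    the pressure p_triple recorded with the bulb at the triple point."""
    return T_TRIPLE_POINT_K * p / p_triple

# Hypothetical readings (arbitrary pressure units; only the ratio matters):
p_at_triple_point = 50.0   # reading with the bulb at the triple point of water
p_now = 54.6               # reading at the unknown temperature

print(kelvin_from_pressure(p_now, p_at_triple_point))  # ~ 298.3 K
```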
It may not be obvious from everyday experience with temperature, but it has a natural zero point, independent of any choice of scale.
Temperature is related to the internal motion of the particles making up a substance: when all that internal motion ceases, the temperature is zero.
You can think of it like the concentration of dye in a tank of water. There's no ambiguity about what zero means: no dye means zero concentration. Accordingly, what you mean when you say "The concentration of dye in this tank is half the concentration in that one" does not depend on the units you use to specify the concentrations.
The confusion may arise from the fact that, unlike the scales used for most quantities with a natural zero point (mass, kinetic energy, etc.), the familiar temperature scales have an offset, chosen so that commonly encountered temperatures come out as smallish numbers.
So the answer to your question about which scale is used in the definition is: any one that doesn't impose such a shift.
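As a concrete illustration (the two temperatures are just examples), a ratio of temperatures is only meaningful on a scale with no such shift:

```python
# Ratios of temperatures only make sense on an absolute (zero-offset) scale.
def celsius_to_kelvin(t_celsius):
    return t_celsius + 273.15

t1_c, t2_c = 10.0, 20.0  # two everyday temperatures in Celsius

ratio_on_celsius = t2_c / t1_c                                        # 2.0
ratio_on_kelvin = celsius_to_kelvin(t2_c) / celsius_to_kelvin(t1_c)   # ~1.035

# 20 degrees C is not "twice as hot" as 10 degrees C: the physically
# meaningful ratio is the one taken on the absolute (kelvin) scale.
print(ratio_on_celsius, ratio_on_kelvin)
```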