
I think the approach to implementing HDR is wrong. For a visual demonstration, I have attached a visualization of color spaces on the RGB color model. It shows that, as a rule, narrower color spaces are subsets of wider ones. That is, when a bitmap image is honestly converted from sRGB to DCI-P3, for example, the colors should not become more saturated. On the contrary, they should stay at the same positions inside the sRGB triangle, and once the output is encoded for the DCI-P3 display, you should not see any difference. In the context of a game, this means that by default any object with sRGB textures should look the same in wider color spaces.
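To make the "same positions" point concrete, here is a minimal sketch (not tied to the engine in any way) of a colorimetric sRGB to Display P3 conversion. The matrices are the standard published ones for sRGB and P3-D65 primaries; the function names are my own. A fully saturated sRGB red comes out numerically "less saturated" in P3 coordinates, yet it displays as exactly the same red on a P3 monitor, which is the whole point.

```python
import numpy as np

# Hypothetical sketch: colorimetric sRGB -> Display P3 (P3 primaries, D65 white,
# sRGB transfer). Matrix values are the standard published ones; names are mine.

SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

XYZ_TO_DISPLAY_P3 = np.array([
    [ 2.4934969, -0.9313836, -0.4027108],
    [-0.8294890,  1.7626641,  0.0236247],
    [ 0.0358458, -0.0761724,  0.9568845],
])

def srgb_eotf(c):
    """Decode the sRGB gamma curve to linear light."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def srgb_oetf(c):
    """Encode linear light with the sRGB curve (Display P3 reuses it)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def srgb_to_display_p3(rgb):
    linear = srgb_eotf(rgb)                # gamma-encoded sRGB -> linear sRGB
    xyz = SRGB_TO_XYZ @ linear             # linear sRGB -> CIE XYZ (D65)
    p3_linear = XYZ_TO_DISPLAY_P3 @ xyz    # XYZ -> linear Display P3
    return srgb_oetf(p3_linear)            # re-encode for the P3 display

# Saturated sRGB red ends up strictly inside the P3 gamut (~[0.92, 0.20, 0.14]),
# i.e. no extra saturation is added by an honest conversion.
print(srgb_to_display_p3([1.0, 0.0, 0.0]))
```

Anything beyond this kind of conversion (stretching sRGB content toward the P3 primaries) is gamut expansion, which is an artistic choice rather than a correct default.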
Then why do we need a wider color space at all? First, you can use textures originally authored for DCI-P3. Second, certain visual effects can strongly distort colors beyond what sRGB can represent (for example, a star with a different spectrum, shield effects, laser hits, and so on).
I don't know exactly how the renderer works, but I assume it first computes colors in the coordinates of its working color model, and only right before output converts them to an 8/10/12-bit format for the monitor in accordance with the target color space, possibly even based on the gamut coverage information reported in the EDID.
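For illustration only, here is a hypothetical sketch of what such an output stage could look like, assuming a linear Rec.709/sRGB-primaries working space and an HDR10 target (Rec.2020 primaries + SMPTE ST 2084 "PQ" encoding at 10 bits). The function names, the 200-nit paper-white choice, and the HDR10 target are my assumptions, not anything the renderer actually does.

```python
import numpy as np

# Hypothetical output stage: keep working-space linear values, and only at
# scan-out re-encode them for the chosen target. Assumed working space:
# linear Rec.709/sRGB primaries. Assumed target: HDR10 (Rec.2020 + PQ, 10-bit).

LINEAR_709_TO_2020 = np.array([
    [0.627404, 0.329283, 0.043313],
    [0.069097, 0.919540, 0.011362],
    [0.016391, 0.088013, 0.895595],
])

def pq_oetf(nits):
    """SMPTE ST 2084 encoding; input is absolute luminance in cd/m^2 (max 10000)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = np.clip(np.asarray(nits, dtype=float) / 10000.0, 0.0, 1.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

def encode_hdr10(linear_709, paper_white_nits=200.0, bit_depth=10):
    """Linear Rec.709-primaries pixel -> quantized HDR10 code values."""
    nits = LINEAR_709_TO_2020 @ (np.asarray(linear_709, dtype=float) * paper_white_nits)
    return np.round(pq_oetf(nits) * (2 ** bit_depth - 1)).astype(int)

# Diffuse white (1, 1, 1) mapped to 200 nits lands at roughly 58% of the 10-bit
# code range; the colors themselves are never "stretched" toward Rec.2020.
print(encode_hdr10([1.0, 1.0, 1.0]))
```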