Let me restate: we don’t have drag and drop in the way users expect, and we never have.
Drag and drop means you click on an object in one place, drag it (without releasing) onto another part of the screen, and release the button. The impressive part is that the source and target are two different windows, or even two different applications. As you do this, the object stays under your cursor the whole time, and when you release it becomes either a copy of the original or the original itself, moved (depending on the product you use). Many products have this behavior. Some of us remember when Windows introduced it - it felt like voodoo.
What we have between the hotbar and the drawing surface is select, move, select (and then drag).
BTW - the reason we don’t is that the combination of windowing APIs we use (right now) does not support this without having to implement multiple platform-specific versions.
The hotbar and the draw surface are two different widgets in GTK and two different windows on Windows. When the mouse is dragged from one to the other, the second window/widget has no record of the click that happened in the first, so the drag appears to it as simple mouse movement. The button is physically down, but the press was never registered there, so the code doesn’t do what is needed.
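To make the failure mode concrete, here is a minimal sketch (hypothetical class and method names, not our actual code or any toolkit API) of per-window event routing: each window only remembers button presses delivered to it, so motion over a second window cannot be recognized as part of a drag.

```python
class Window:
    """Toy model of per-window input state, as in GTK widgets or Win32 windows."""

    def __init__(self, name):
        self.name = name
        self.button_pressed_here = False  # only set by presses delivered to THIS window

    def on_button_press(self):
        self.button_pressed_here = True

    def on_motion(self, button_is_down):
        # Even if the button is physically down, a window that never saw the
        # press has no drag context and treats the motion as plain movement.
        if button_is_down and self.button_pressed_here:
            return "dragging"
        return "plain movement"

hotbar = Window("hotbar")
surface = Window("draw surface")

hotbar.on_button_press()           # user clicks on the hotbar
print(hotbar.on_motion(True))      # dragging
print(surface.on_motion(True))     # plain movement - the press was never seen here
```

Real toolkits solve this with explicit cross-window drag protocols (e.g. GTK's drag-source/drag-destination mechanism, or OLE drag and drop on Windows), which is the extra per-platform work referred to above.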