Even hex for machine code is only used by us because it's a lot easier to remember & shorter to write down/type than binary. Then there are the joys of machine code being translated to the lowest level of all, "microcode", which thankfully only the engineers etc that design the CPUs have to deal with.
Technically machine code is the lowest level available to us mere mortals to use, but it is not the lowest level that exists in a CPU. At the end of the day the CPU only understands 0s & 1s, & I don't know anyone who writes code in pure binary. Bet there is someone somewhere that does, though.
What's the difference between 0x40 & INC AX? The answer is nothing. Both are exactly the same, & neither is understood by the CPU without being translated by a higher-level language into binary (which is all the computer understands natively) & then executed/stored.
So I am going to stand by my "negligible" comment, but that is my personal opinion; you have yours & I respect that as well.
Lee
First of all, microcode is the very part of the processor that we are trying to program using machine code. If you want to go lower than that, you'd have to manually apply the electrical voltages to the transistors yourself. So, being mere mortals, let's get back to reality.
Machine code is binary, and only binary. The hexadecimal format is not used as a mnemonic code because it's easier to remember, but because each hexadecimal digit stands directly for four bits, so the tools we usually work with show hex rather than unwieldy strings of zeroes and ones. Furthermore, these hexadecimal values need no further conversion, because they already represent the binary values in their bits; for example, 0xC has a binary value of 1100, 0xAA has a binary value of 10101010, and 0xAFC has a binary value of 101011111100. That's not a conversion, but the direct binary bit values of the hexadecimal digits, which is the machine code in zeroes and ones.
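To make that digit-to-bits mapping concrete, here's a minimal C sketch of my own (purely illustrative, not from either post): it walks the hex digits of the 0xAFC example and prints the four bits behind each one, using nothing but shifts and masks on the very same bits.

    #include <stdio.h>

    int main(void)
    {
        unsigned int value = 0xAFC;   /* the three-digit example from above */

        /* Walk the hex digits from most to least significant and show
           that each one is exactly one 4-bit group of the same number. */
        for (int shift = 8; shift >= 0; shift -= 4) {
            unsigned int nibble = (value >> shift) & 0xF;
            printf("%X = ", nibble);
            for (int bit = 3; bit >= 0; bit--)
                printf("%u", (nibble >> bit) & 1u);
            printf("\n");
        }
        return 0;
    }

Running it prints A = 1010, F = 1111, C = 1100: the same bits, just grouped four at a time.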
Even if you were to input binary code directly, the sequences that you edit on the screen wouldn't be the real binary values. In fact, it would take more effort for an editor to display binary representations on the screen, having to convert them back and forth between displayable sequences of zeroes and ones, and usable/readable values, be they decimal, octal, or even hexadecimal.
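And to show the back-and-forth such an editor would be stuck doing, here's another small C sketch (again just an illustration): the zeroes and ones you type are only text characters, so they have to be parsed into a real value on the way in and formatted back into text on the way out.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* What the user sees and types is a string, not a value. */
        const char *typed = "01000000";
        unsigned char value = (unsigned char)strtol(typed, NULL, 2);

        printf("parsed value: 0x%02X (%u)\n", value, value);

        /* ...and back into displayable text again. */
        char shown[9];
        for (int bit = 7; bit >= 0; bit--)
            shown[7 - bit] = ((value >> bit) & 1u) ? '1' : '0';
        shown[8] = '\0';
        printf("displayed again: %s\n", shown);
        return 0;
    }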
And to your question, "What's the difference between 0x40 & INC AX?", the answer is plenty. Firstly, inc ax is an Assembly language mnemonic that has to be assembled before the processor can understand it, while 0x40 is the opcode byte itself. If that hexadecimal value is used in the context of a high-level language, it still has to be compiled before the processor can understand it. But, on the other hand, if it is entered in a raw hex editor, the processor can consume its binary bit values directly, without any conversion or assembly.
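As a toy illustration of that translation step, here's a one-instruction "assembler" sketch of my own (assuming plain 16-bit x86 encoding, where inc reg16 assembles to the single byte 0x40 plus the register number; in 64-bit mode those same bytes are reused as REX prefixes, so this is specific to 16/32-bit code):

    #include <stdio.h>
    #include <string.h>

    /* Standard 16-bit x86 register order for the 0x40+r encoding. */
    static const char *reg16[8] = {
        "ax", "cx", "dx", "bx", "sp", "bp", "si", "di"
    };

    int main(void)
    {
        const char *source = "inc ax";   /* mnemonic: just text for humans */

        for (int r = 0; r < 8; r++) {
            char line[16];
            snprintf(line, sizeof line, "inc %s", reg16[r]);
            if (strcmp(source, line) == 0) {
                unsigned char opcode = (unsigned char)(0x40 + r);
                printf("\"%s\" assembles to the byte 0x%02X\n",
                       source, opcode);
                return 0;
            }
        }
        printf("\"%s\" not recognised by this toy assembler\n", source);
        return 0;
    }

Running it prints "inc ax" assembles to the byte 0x40: the mnemonic is text that has to be looked up and translated, while the byte is what the processor actually consumes.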
With all due respect, this is not a matter of opinion, but rather one of fact. Assembly language is not machine code, and the fact that one is readily understood by the processor while the other has to be translated first makes a big difference. Far from negligible.