I think you're thinking about this the wrong way. The amount of encoding space is set by the instruction template they chose, which uses a total of 8 bits for opcode. That leaves 24 bits to encode everything else needed to make the instruction useful.
One bit gets burned on selecting a 32-bit or 64-bit comparison. Three are used for the condition code. Five are used to encode the source register. There's no way to economize on these; at most, I think you could shave one bit off the condition code field, if you were willing to make the instruction less capable.
The remaining 15 bits must be split between two values: the immediate that's compared to the source register, and the offset to add to the program counter if the branch is taken. They chose to go with a 9-bit offset, leaving 6 bits for the immediate value.
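Writing the budget out makes the arithmetic easy to check. The field widths below are the ones from this thread; the names and any implied ordering are just my own illustration, not the actual encoding:

    #include <assert.h>
    #include <stdio.h>

    #define OPCODE_BITS 8   /* instruction template / major opcode    */
    #define SF_BITS     1   /* 32-bit vs 64-bit comparison            */
    #define COND_BITS   3   /* condition code                         */
    #define RN_BITS     5   /* source register                        */
    #define OFFSET_BITS 9   /* branch offset                          */
    #define IMM_BITS    6   /* immediate compared against the source  */

    int main(void) {
        /* everything has to fit in one 32-bit instruction word */
        assert(OPCODE_BITS + SF_BITS + COND_BITS + RN_BITS
               + OFFSET_BITS + IMM_BITS == 32);
        printf("left after the opcode:           %d\n", 32 - OPCODE_BITS);  /* 24 */
        printf("left for the immediate + offset: %d\n",
               32 - OPCODE_BITS - SF_BITS - COND_BITS - RN_BITS);           /* 15 */
        return 0;
    }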
Would you like more than 9 bits of offset? Yes, absolutely. As Dougall comments, +/- 1KiB is enough to be useful, but still feels a little tight.
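For what it's worth, the +/- 1KiB figure falls out if you assume the 9-bit offset is a signed count of 4-byte instructions, the usual convention for AArch64 branch offsets. That scaling is my assumption, but it's the only reading that matches the quoted range:

    #include <stdio.h>

    int main(void) {
        int offset_bits = 9;
        /* signed field: -2^8 .. 2^8 - 1 instructions */
        long min_insns = -(1L << (offset_bits - 1));
        long max_insns =  (1L << (offset_bits - 1)) - 1;
        /* scale by the 4-byte instruction size (assumed) */
        printf("branch reach: %ld to %+ld bytes\n",
               min_insns * 4, max_insns * 4);   /* -1024 to +1020 */
        return 0;
    }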
Would you like more than 6 bits for the immediate? Yes, absolutely. But 6 bits does encode zero and one, which in nearly every context are the most frequently used immediate values, and by a fairly wide margin. So from a certain perspective, 6 is a luxury.
I don't think you could go in the other direction and keep these instructions useful; offset size does matter quite a lot. So the question is: would it have been better to have a 5- or even 4-bit immediate to double or quadruple the offset range? I don't pretend to know, but presumably Arm based this decision on analysis. It was probably someone's project to implement compiler support for several different split options, compile a bunch of testcases (probably including the entire SPEC suite) with each, and collect data on how often each split forced the compiler to avoid emitting a compare-and-branch instruction.
Edit: that kind of analysis was also no doubt used in deciding whether these instructions were worthwhile at all.
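(If you want to picture what that measurement would check at each candidate branch site, it's roughly the sketch below. All of the names and the signed/unsigned choices are my guesses, not anything Arm has published.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Does an (assumed unsigned) immediate fit in `bits` bits? */
    static bool fits_unsigned(uint64_t v, int bits) {
        return v < (1ULL << bits);
    }

    /* Does a signed instruction-count offset fit in `bits` bits? */
    static bool fits_signed(int64_t v, int bits) {
        return v >= -(1LL << (bits - 1)) && v < (1LL << (bits - 1));
    }

    /* True if a fused compare-and-branch could be emitted under a given
     * imm/offset split; false is the "compiler forced to fall back to a
     * separate compare plus branch" case you'd be counting. */
    static bool can_fuse(uint64_t imm, int64_t offset_insns,
                         int imm_bits, int offset_bits) {
        return fits_unsigned(imm, imm_bits) &&
               fits_signed(offset_insns, offset_bits);
    }

    int main(void) {
        /* comparing against #1 and branching 300 instructions ahead
         * doesn't fit a 6/9 split, but a 5/10 split would take it */
        printf("6/9 split:  %s\n", can_fuse(1, 300, 6, 9)  ? "fused" : "fallback");
        printf("5/10 split: %s\n", can_fuse(1, 300, 5, 10) ? "fused" : "fallback");
        return 0;
    }

Tally how often that check comes back false over a big corpus, once per candidate split, and you have exactly the data the decision would need.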