Digital compute-in-memory (CIM) systems, known for their precise computations, have emerged as a viable solution for real-time deep neural network (DNN) inference. However, traditional digital CIM systems often suffer from suboptimal array utilization due to static multi-bit input/mapping dataflows and inflexible adder tree structures, which do not adequately accommodate the diverse computational demands of DNNs. In this paper, we introduce a novel digital CIM architecture that dynamically redistributes bit precisions across the input and mapping domains according to computational load and data precision, thereby improving array utilization and energy efficiency. To support flexible bit configurations, the system incorporates an adaptive adder tree with integrated bit-shift logic. To minimize the potential overhead introduced by the bit-shiftable adder tree, we also propose a grouping algorithm that efficiently executes shift-and-add operations. Simulation results show that our proposed methods not only improve array utilization but also significantly accelerate computation, achieving up to a 10.46× speedup over traditional methods.