Compute-in-memory (CIM) is an efficient approach to implementing deep neural networks (DNNs), but CIM architectures incur significant overhead from analog-to-digital converters (ADCs), and this overhead grows with ADC precision. Low-precision ADCs reduce the overhead but introduce partial-sum quantization errors that degrade accuracy. Accuracy is further degraded by low-bit weight constraints imposed by memory cell limitations and by the need to spread higher-bit weights across multiple cells. While fine-grained partial-sum quantization has been explored as a way to lower ADC resolution, weight quantization granularity, which ultimately limits the accuracy achievable under partial-sum quantization, remains underexplored and leaves room for improvement. In this work, we address these challenges by aligning the quantization granularities of weights and partial-sums, specifically at the column-wise level. Our method improves accuracy without increasing dequantization overhead, simplifies training by removing the two-stage process, and remains robust to memory cell variations owing to independent column-wise scale factors. We also propose an open-source CIM-oriented convolution framework that handles fine-grained weights and partial-sums efficiently through a novel tiling method and group convolution. Experiments on ResNet-20 (CIFAR-10, CIFAR-100) and ResNet-18 (ImageNet) show accuracy improvements of 0.99%, 2.69%, and 1.01%, respectively, over the best-performing related works. A comprehensive variation analysis further confirms the robustness of our method against memory cell variations. These findings highlight the effectiveness of our quantization scheme in improving accuracy and robustness while maintaining hardware efficiency in CIM-based DNN implementations.
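To make the core idea of column-wise granularity alignment concrete, the following is a minimal NumPy sketch, not the paper's implementation: it assumes a single crossbar tile where each weight-matrix column maps to one bit-line, quantizes each column with its own weight scale, and then quantizes the analog column sums (partial sums) at the same column-wise granularity so that dequantization folds into one per-column multiply. Function names, bit-widths, and the symmetric rounding scheme are illustrative assumptions.

```python
import numpy as np

def quantize_columnwise(weights, n_bits=4):
    """Illustrative column-wise weight quantization (one scale per bit-line).

    weights: (rows, cols) array; each column maps to one CIM column.
    Returns integer codes and the per-column scales used for dequantization.
    """
    qmax = 2 ** (n_bits - 1) - 1
    scales = np.max(np.abs(weights), axis=0) / qmax      # one scale per column
    scales = np.where(scales == 0, 1.0, scales)           # guard all-zero columns
    codes = np.clip(np.round(weights / scales), -qmax, qmax)
    return codes.astype(np.int32), scales

def quantize_partial_sums(codes, activations, w_scales, adc_bits=5):
    """Quantize the column partial sums at the same column-wise granularity.

    Because both scales are per column, dequantization is a single
    per-column multiply (ps_scale * w_scale), mimicking the alignment idea.
    """
    psums = activations @ codes                            # ideal analog column sums
    qmax = 2 ** (adc_bits - 1) - 1
    ps_scales = np.max(np.abs(psums), axis=0) / qmax
    ps_scales = np.where(ps_scales == 0, 1.0, ps_scales)
    ps_codes = np.clip(np.round(psums / ps_scales), -qmax, qmax)
    return ps_codes * ps_scales * w_scales                 # dequantized output

# Toy example: a 64x16 weight tile and a batch of 8 binary activation vectors.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16)).astype(np.float32)
X = rng.integers(0, 2, size=(8, 64)).astype(np.float32)
codes, w_scales = quantize_columnwise(W, n_bits=4)
approx = quantize_partial_sums(codes, X, w_scales, adc_bits=5)
print(np.max(np.abs(approx - X @ W)))                      # residual weight + ADC quantization error
```

In this sketch the partial-sum scale of each column is derived alongside that column's weight scale, so no extra per-element dequantization arithmetic is introduced beyond the per-column factors already required.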