All convolutions in a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data stay unchanged, so convolutions within a dense block all use a stride of 1. Pooling layers are inserted between dense blocks for additional dimensionality reduction.
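The sketch below illustrates this pattern, assuming PyTorch; the names `DenseLayer` and `DenseBlock` and the hyperparameters (growth rate, kernel size, number of layers) are illustrative choices, not taken from the original.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One BN -> ReLU -> 3x3 conv step; stride 1 with padding 1 keeps H and W fixed."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x):
        out = self.conv(torch.relu(self.bn(x)))
        # Channel-wise concatenation is valid because the spatial dims are unchanged.
        return torch.cat([x, out], dim=1)

class DenseBlock(nn.Module):
    """Stack of dense layers; each layer adds growth_rate channels to the input."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        layers = []
        channels = in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = channels

    def forward(self, x):
        return self.block(x)

# Pooling between dense blocks handles the dimensionality reduction:
# it halves H and W, which is why it cannot live inside a block.
transition = nn.AvgPool2d(kernel_size=2, stride=2)

block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
x = torch.randn(1, 64, 32, 32)
y = transition(block(x))  # shape: (1, 64 + 4*32, 16, 16) = (1, 192, 16, 16)
```

Note that the spatial size is constant throughout the block (32x32 here) so every concatenation lines up, while the channel count grows by the growth rate at each layer; only the transition pooling between blocks shrinks H and W.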