In many machine learning tasks it is desirable that a model’s prediction transforms equivariantly under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs which achieve joint equivariance under translations and rotations. The proposed architecture employs steerable filters to efficiently compute orientation-dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping while preserving the full information on the extracted features and their relative orientations. A commonly used weight initialization scheme is generalized from pixel-based filters to filters which are defined as linear combinations of a system of atomic filters. The proposed approach significantly improves upon the state of the art on the rotated MNIST benchmark as well as on the ISBI 2012 2D EM segmentation challenge.