It definitely seems like the wrong approach to me to have Puppet manage your base images. They're not VMs: they shouldn't have multiple services, they shouldn't require any complex configuration management, they should just be the bare minimum needed to support your application's local runtime dependencies, and that's it.
From previous experience migrating from a Puppet setup to one that used containers, Puppet's vestigial use case ends up being to get the orchestration control plane itself set up (i.e. Kubernetes, networking configs, etc.), and that's about it.
There's nothing inherent about Puppet that means it has to manage multi-service, "whole OS"-like installations. It can just as easily be put to the task of a Dockerfile: install the dependencies and deployables for a single application. Its robust ability to manage things like user accounts, packages, and scheduled jobs (e.g. for alerting, though you would have to install at least a second service for this: crond) makes it vastly superior to Dockerfile shell scripts for complex tasks.
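For illustration, a minimal manifest along those lines might look something like this (the package names, user, and cron job are just placeholders, not anything specific):

    # app.pp -- illustrative sketch; resource names are placeholders
    package { ['nginx', 'libpq5']:
      ensure => installed,
    }

    user { 'app':
      ensure => present,
      home   => '/srv/app',
      shell  => '/usr/sbin/nologin',
    }

    cron { 'heartbeat-alert':
      ensure  => present,
      command => '/srv/app/bin/heartbeat.sh',
      user    => 'app',
      minute  => '*/5',
    }

Each of those resources is idempotent, so re-applying the manifest converges to the same state instead of blindly re-executing shell commands.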
Think of Puppet more as a way of reducing the total number of crazy shell commands in your Dockerfiles, rather than hiding the craziness in layers and hoping it all composes properly. If you do use lots of layers, Puppet can make your life much easier, since it can be better at detecting what previous layers already changed and working around it (think redundant package install commands: even a no-op "already installed!" run takes time, and if you're installing hundreds of packages, as many people are for better or worse, that can eat up build time).
Puppet isn't just a VM provisioner; it can also be used as a replacement for large parts of your Dockerfile, or as a better autoconf that sets up the environment and dependencies your application runs in.
The point about layer complexity is a great one I didn't even consider. Your "config" step is no longer a mishmash of dozens of COPY/RUN/etc. directives (resulting in N new intermediate image layers); it just results in a single atomic layer where you run the Puppet bootstrap.
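Roughly what I have in mind, with arbitrary image and manifest names:

    FROM debian:bookworm-slim
    COPY manifests/ /tmp/manifests/
    # One RUN directive -> one layer, no matter how much the manifest declares
    RUN apt-get update \
        && apt-get install -y --no-install-recommends puppet \
        && puppet apply /tmp/manifests/app.pp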
Obviously you could accomplish this with shell scripts as well to constrain your config step to a single Docker RUN directive, but I prefer the declarative-state approach to the imperative one in this case.
1) I think you missed my point entirely here; I probably didn't do a good job of explaining it. I was trying to say that you run Puppet once at build time to bootstrap the configuration for the image, and that's it. You could even uninstall it in the last build step if you want to reduce the final image size. The primary distinction here is declarative vs. imperative configuration management.
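Concretely, something like this (placeholder names again); because the purge happens in the same RUN as the apply, Puppet never persists into any layer of the final image:

    RUN apt-get update \
        && apt-get install -y --no-install-recommends puppet \
        && puppet apply /tmp/manifests/app.pp \
        && apt-get purge -y puppet \
        && apt-get autoremove -y \
        && rm -rf /var/lib/apt/lists/*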
2) The one-process-per-container dogma isn't necessarily the only way to run a successful Docker stack. For example, I don't see anything wrong with using supervisor to manage whatever processes you're running in your container.
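e.g. a minimal supervisord.conf along these lines (the program name and command are made up):

    ; run supervisord in the foreground as the container's PID 1
    [supervisord]
    nodaemon=true

    [program:app]
    command=/srv/app/bin/server
    autorestart=true
    ; send logs to the container's stdout instead of log files
    stdout_logfile=/dev/stdout
    stdout_logfile_maxbytes=0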