Puppet has come a long way since its initial release in 2005, but it has traditionally followed a client-server architecture, as has its nemesis Chef (which later released chef-solo for running masterless). Many of the more recent configuration management tools, such as Ansible and Salt, dropped this client-server architecture to some extent, offering agentless operation from the get-go. (Agentless, not to be confused with masterless, means no agent is needed on the host; masterless means an agent does run on the host, but no master server is required.)
However, as you might (or might not) be aware, it has pretty much always been possible to run Puppet stand-alone as well. This is what is often referred to as a masterless Puppet setup, and it has recently been added to Puppet's official offering in the form of Puppet Bolt.
While I have not played with Puppet Bolt myself yet, there are multiple reasons I can imagine for wanting to use it. First of all, it is an officially supported offering by the Puppet company. Besides that, running classical masterless Puppet on more than a single node requires solving a number of problems yourself: you need to sync all the modules to each node, copy any other files over, and run puppet apply. Puppet Bolt might have some advantages here, as it is built to be a simple, agentless tool for running tasks on smaller infrastructures made up of a wide variety of remote hosts. This description is interesting, as it implies that Bolt is agentless (running from your local computer) as well as masterless, but achieves this in a different way than I described above. It sounds more like Ansible than like running puppet apply on a remote server. To get your hands dirty with Puppet Bolt, you can try out their Tasks Hands-on Lab, which walks through many of its features step by step.
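To make the classical workflow concrete, here is a minimal sketch of the sync-and-apply loop described above. The host list, paths, and the use of rsync over SSH are all assumptions for illustration; your layout will differ.

```shell
#!/bin/sh
# Masterless Puppet deploy sketch (hypothetical hosts and paths).
# Sync the Puppet code (modules, manifests, Hiera data) to each node,
# then run puppet apply locally on that node. No master involved.
set -e

HOSTS="web1.example.com web2.example.com"
REPO_DIR="$HOME/puppet-control"                          # local checkout
REMOTE_DIR="/etc/puppetlabs/code/environments/production"

for host in $HOSTS; do
  # Copy modules, manifests, and hieradata to the node
  rsync -az --delete "$REPO_DIR/" "root@$host:$REMOTE_DIR/"
  # Compile and apply the catalog on the node itself
  ssh "root@$host" "puppet apply $REMOTE_DIR/manifests/site.pp"
done
```

This is exactly the bookkeeping (file syncing, per-host invocation) that a tool like Bolt aims to take off your hands.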
Either way, back to the classical masterless Puppet workflow. I have been running this setup on multiple hosts for quite a while now, and so far it has not disappointed me. It does mean that all your modules (and Hiera data) reside on every host that you manage this way. The Puppet Beginner's Guide describes this in more detail, and the related repositories are definitely worth checking out. The only thing I have found so far that does not work with masterless Puppet is Exported Resources. If you ever stumble upon a warning or error like
You cannot collect exported resources without storeconfigs being set; the collection will be ignored or
Not collecting exported resources without storeconfigs when, for example, using a Puppet Forge module, it most likely means that the module uses storeconfigs, which you do not have enabled because you are not running a Puppet master and therefore not using PuppetDB. In short, Puppet has nowhere to store the resources it exports, or to find resources that other nodes have exported, so you cannot make use of exported resource collection. Exported resources are a nifty way to declare a desired state for a resource and publish it for use by other nodes, which can then collect it. This helps manage things that rely on nodes knowing the state of other nodes (e.g. for configuring monitoring or backups).
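For reference, this is roughly what the export/collect pattern looks like in a manifest; the Nagios resources are just an illustrative choice:

```puppet
# On each monitored node: declare a nagios_host resource, but export it
# instead of applying it locally. The @@ prefix marks it as exported.
@@nagios_host { $facts['networking']['fqdn']:
  ensure  => present,
  address => $facts['networking']['ip'],
  use     => 'generic-host',
}

# On the monitoring server: collect every exported nagios_host.
# The <<| |>> collector requires storeconfigs/PuppetDB, which is
# precisely what a masterless setup lacks.
Nagios_host <<| |>>
```

Without PuppetDB the first half has nowhere to publish to, and the collector in the second half silently gathers nothing, which is what the warnings above are telling you.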
Forge modules often have a parameter called something like collect_exported that you can set to false so that the module stops trying to use this functionality. You then have to define the nodes that need monitoring or backups explicitly yourself, for example via Hiera data (in YAML). From the looks of it, Puppet Bolt has not solved this problem yet, so besides letting you run Puppet from your local machine, and a somewhat different way of setting up manifests as you can see here, it does not seem to add much yet.