diff --git a/docs/site/docs/cookbook/acknowledgements.mdx b/docs/site/docs/cookbook/acknowledgements.mdx
index 96b70ad30cb657b04dd2d55844cc8a4e3a1e5b51..91e23c4af11a6a860caf50bbe02eccd837c66944 100644
--- a/docs/site/docs/cookbook/acknowledgements.mdx
+++ b/docs/site/docs/cookbook/acknowledgements.mdx
@@ -8,9 +8,9 @@ import TabItem from '@theme/TabItem';
 
 While consuming the messages we could issue acknowledgements, to denote that the messages were (or were not) processed successfully.
 
-The following example produces 10 sample messages, and then acknowledges all of them, except for the messages 3, 5 and 7. The message #3 receives a negative acknowledgement, which puts is back in the stream for the repeated processing. On the second attempt the message #3 gets acknowledged. The messages 5 and 7 remain unacknowledged.
+Here is a snippet that expects 10 sample messages in the default stream. While consuming, message #3 receives a negative acknowledgement, which puts it back in the stream for repeated processing, while messages #5 and #7 remain unacknowledged. On the second attempt message #3 gets acknowledged.
 
-Only the acknowledgements-relevant parts would be explained here. Look at the corresponding examples to learn about producers and consumers in detailes.
+You can find the full example in the git repository.
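The flow described above can be sketched in plain Python. This is an illustration of the acknowledgement logic only (a requeue on negative acknowledgement, two messages left unacknowledged), not the ASAP::O consumer API:

```python
# Illustration only: simulates the acknowledgement flow described above,
# not the actual ASAP::O consumer API.
from collections import deque

stream = deque(range(1, 11))   # 10 sample message IDs
acknowledged = set()
attempts = {}

while stream:
    msg_id = stream.popleft()
    attempts[msg_id] = attempts.get(msg_id, 0) + 1
    if msg_id == 3 and attempts[msg_id] == 1:
        stream.append(msg_id)      # negative ack: put it back for reprocessing
    elif msg_id in (5, 7):
        pass                       # leave these unacknowledged
    else:
        acknowledged.add(msg_id)   # normal positive acknowledgement

unacknowledged = sorted(set(range(1, 11)) - acknowledged)
print(unacknowledged)  # -> [5, 7]
```

Message #3 is processed twice: the second pass falls through to the positive-acknowledgement branch, exactly as in the snippet below.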
 
 <Tabs
   groupId="language"
@@ -22,14 +22,39 @@ Only the acknowledgements-relevant parts would be explained here. Look at the co
 }>
 <TabItem value="python">
 
-```python content="./examples/python/acknowledgements.py"
+```python content="./examples/python/acknowledgements.py" snippetTag="consume"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/acknowledgements.cpp"
+```cpp content="./examples/cpp/acknowledgements.cpp" snippetTag="consume"
+```
+
+</TabItem>
+</Tabs>
+
+The list of unacknowledged messages can be accessed at any time. The following snippet retrieves and prints it.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/acknowledgements.py" snippetTag="print"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/acknowledgements.cpp" snippetTag="print"
 ```
 
 </TabItem>
diff --git a/docs/site/docs/cookbook/datasets.mdx b/docs/site/docs/cookbook/datasets.mdx
index 0c8cf6564169ddc6873e4d925d648181d24aafdb..395d16fc8c3ac4f6571e0d502bc38067910b97ff 100644
--- a/docs/site/docs/cookbook/datasets.mdx
+++ b/docs/site/docs/cookbook/datasets.mdx
@@ -8,10 +8,10 @@ import TabItem from '@theme/TabItem';
 
 The messages in the stream can be multi-parted. If you have several producers (e.g. sub-detectors) that produces several parts of the single message, you can use datasets to assemble a single message from several parts.
 
-Only the dataset-relevant parts would be explained here. Look at the corresponding examples to learn about producers and consumers in detailes.
-
 ## Dataset Producer
 
+Here is a code snippet that produces a three-part dataset. The full runnable example can be found in the git repository.
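The idea behind datasets is that several independent producers each contribute one part of a single logical message. A minimal, self-contained sketch of that assembly (plain Python, not the ASAP::O API; the detector names are made up):

```python
# Illustration only: assembling one logical message from parts that arrive
# in arbitrary order from independent producers. Not the ASAP::O API.
incoming = [  # (message_id, part_no, payload), order of arrival not guaranteed
    (1, 2, b"det-B"), (1, 1, b"det-A"), (1, 3, b"det-C"),
]
TOTAL_PARTS = 3
datasets = {}

for msg_id, part_no, payload in incoming:
    parts = datasets.setdefault(msg_id, {})
    parts[part_no] = payload
    if len(parts) == TOTAL_PARTS:                 # dataset is complete
        assembled = [parts[i] for i in sorted(parts)]
        print(msg_id, assembled)
```

With ASAP::O this bookkeeping is done for you: each producer only tags its part with the dataset ID and part number, as the snippet below shows.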
+
 <Tabs
   groupId="language"
   defaultValue="python"
@@ -22,14 +22,14 @@ Only the dataset-relevant parts would be explained here. Look at the correspondi
 }>
 <TabItem value="python">
 
-```python content="./examples/python/produce_dataset.py"
+```python content="./examples/python/produce_dataset.py" snippetTag="dataset"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/produce_dataset.cpp"
+```cpp content="./examples/cpp/produce_dataset.cpp" snippetTag="dataset"
 ```
 
 </TabItem>
@@ -39,6 +39,8 @@ You should see the "successfuly sent" message in the logs, and the file should a
 
 ## Dataset Consumer
 
+Here is a snippet that consumes a dataset. The full example is also in the git repository.
+
 <Tabs
   groupId="language"
   defaultValue="python"
@@ -49,18 +51,17 @@ You should see the "successfuly sent" message in the logs, and the file should a
 }>
 <TabItem value="python">
 
-```python content="./examples/python/consume_dataset.py"
+```python content="./examples/python/consume_dataset.py" snippetTag="dataset"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/consume_dataset.cpp"
+```cpp content="./examples/cpp/consume_dataset.cpp" snippetTag="dataset"
 ```
 
 </TabItem>
 </Tabs>
 
-
 The details about the received dataset should appear in the logs, together with the message "stream finished" (if the "finished" flag was sent for the stream). The "stream ended" message will appear for non-finished streams, but may also mean that the stream does not exist (or was deleted).
diff --git a/docs/site/docs/cookbook/metadata.mdx b/docs/site/docs/cookbook/metadata.mdx
index c88c297779075baa256bfe8954befee43513e691..fe2698d9f08c745a0f492c62a484a1be4e265a29 100644
--- a/docs/site/docs/cookbook/metadata.mdx
+++ b/docs/site/docs/cookbook/metadata.mdx
@@ -8,7 +8,172 @@ import TabItem from '@theme/TabItem';
 
 You can also store any custom metadata with your beamtime, stream, and each message. This tutorial shows you how you can store, update and access this metadata. The metadata is stored in JSON, and any JSON structure is supported.
 
-Only the metadata-relevant parts would be explained here. Look at the corresponding examples to learn about producers and consumers in detailes.
+:::info
+Since C++ doesn't have built-in JSON support, you'd have to use a third-party library if you want JSON parsing. In this tutorial we won't use any JSON parsing for C++, and will treat JSON documents as regular strings. Please note that ASAP::O only accepts valid JSON, and providing invalid input will result in an error.
+:::
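Because invalid JSON is rejected, it can be worth validating a hand-built metadata string locally before sending it. A small check with the standard library (the field names are just examples):

```python
# ASAP::O only accepts valid JSON metadata; a quick local check with the
# standard library catches mistakes before the request is sent.
import json

metadata = '{"condition": "test", "somevalue": 42, "nested": {"ok": true}}'
try:
    parsed = json.loads(metadata)   # raises ValueError for invalid JSON
    print("valid, somevalue =", parsed["somevalue"])
except ValueError:
    print("invalid JSON, fix it before sending")
```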
+
+
+## Send Metadata
+
+The following snippet shows how to send the beamtime metadata.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/metadata.py" snippetTag="beamtime_set"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="beamtime_set"
+```
+
+</TabItem>
+</Tabs>
+
+Any metadata can be updated at any moment. Here is an example of how to do it with the beamtime metadata.
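Conceptually, an update of JSON metadata behaves like a merge at the top level: existing keys get new values, new keys are added. This is only an illustration of that idea with plain dictionaries; whether ASAP::O merges or replaces depends on the mode you request in the actual call.

```python
# Illustration of a metadata update as a top-level JSON merge; the field
# names are examples, and the real merge/replace behaviour is chosen per call.
import json

current = json.loads('{"condition": "test", "somevalue": 42}')
update  = json.loads('{"somevalue": 43, "comment": "updated"}')

current.update(update)   # existing keys are overwritten, new keys are added
print(json.dumps(current, sort_keys=True))
```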
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/metadata.py" snippetTag="beamtime_update"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="beamtime_update"
+```
+
+</TabItem>
+</Tabs>
+
+In the same way, the metadata can be set for each stream.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/metadata.py" snippetTag="stream_set"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="stream_set"
+```
+
+</TabItem>
+</Tabs>
+
+And for each message.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/metadata.py" snippetTag="message_set"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="message_set"
+```
+
+</TabItem>
+</Tabs>
+
+## Read Metadata
+
+Here we read the beamtime metadata. In this example it already incorporates the changes we made during the update.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/metadata.py" snippetTag="beamtime_get"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="beamtime_get"
+```
+
+</TabItem>
+</Tabs>
+
+Same for the stream.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/metadata.py" snippetTag="stream_get"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="stream_get"
+```
+
+</TabItem>
+</Tabs>
+
+And for the message.
 
 <Tabs
   groupId="language"
@@ -20,16 +185,15 @@ Only the metadata-relevant parts would be explained here. Look at the correspond
 }>
 <TabItem value="python">
 
-```python content="./examples/python/metadata.py"
+```python content="./examples/python/metadata.py" snippetTag="message_get"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-Since C++ doesn't have a built-in JSON support, you'd have to use 3rd party libs if you want JSON parsing. In this tutorial we won't use any JSON parsing, and will treat JSONs as regular strings. Please note, that ASAP::O only supports valid JSONs, and providing invalid input will result in error.
 
-```cpp content="./examples/cpp/metadata.cpp"
+```cpp content="./examples/cpp/metadata.cpp" snippetTag="message_get"
 ```
 
 </TabItem>
diff --git a/docs/site/docs/cookbook/next_stream.mdx b/docs/site/docs/cookbook/next_stream.mdx
index bec38c88bea9a850fa6cd609934c62978dc6eedf..25af3a4a3428140619300a1098146b107a6f81f2 100644
--- a/docs/site/docs/cookbook/next_stream.mdx
+++ b/docs/site/docs/cookbook/next_stream.mdx
@@ -8,7 +8,7 @@ import TabItem from '@theme/TabItem';
 
 When all the data in the stream is sent, the stream can be finished, and it is posiible to set the "next stream" to follow up the first. In this tutorial it'll be shown how several streams can be chained together in single consumer by using the stream finishing.
 
-Only the stream chaining-relevant parts would be explained here. Look at the corresponding examples to learn about producers and consumers in detailes.
+The next stream is set by providing an additional parameter while finishing the stream.
 
 <Tabs
   groupId="language"
@@ -20,17 +20,42 @@ Only the stream chaining-relevant parts would be explained here. Look at the cor
 }>
 <TabItem value="python">
 
-```python content="./examples/python/next_stream.py"
+```python content="./examples/python/next_stream.py" snippetTag="next_stream_set"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/next_stream.cpp"
+```cpp content="./examples/cpp/next_stream.cpp" snippetTag="next_stream_set"
 ```
 
 </TabItem>
 </Tabs>
 
-The output will show the messages being consumed from the streams in order. First, the ```default``` stream, then the ```next```.
+The reading of the streams can then be chained together. When one stream finishes and the next stream is provided, the reading of the next stream can start immediately. This example reads the whole chain of streams until it encounters a non-finished stream, or a stream that was finished without a ```next```.
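The chain-walking logic can be pictured as follows. This is a plain-Python sketch of the control flow only (the stream names and the dictionary layout are made up, not the ASAP::O API):

```python
# Illustration only: walking a chain of streams via their "next" links until
# a stream is unfinished or has no next stream set. Not the ASAP::O API.
streams = {
    "default": {"finished": True, "next": "next", "messages": [1, 2]},
    "next":    {"finished": True, "next": None,   "messages": [3, 4]},
}

consumed = []
name = "default"
while name in streams:
    info = streams[name]
    consumed.extend(info["messages"])          # read this stream to the end
    if not info["finished"] or info["next"] is None:
        break                                  # end of the chain
    name = info["next"]                        # follow the link

print(consumed)  # -> [1, 2, 3, 4]
```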
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/next_stream.py" snippetTag="read_stream"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/next_stream.cpp" snippetTag="read_stream"
+```
+
+</TabItem>
+</Tabs>
+
+The output will show the messages being consumed from the streams in order. For this example (the full file can be found in the git repository) it will be the ```default``` stream first, then ```next```.
diff --git a/docs/site/docs/cookbook/query.mdx b/docs/site/docs/cookbook/query.mdx
index b1d5931938da81ce9a09affa4981d99b1115e593..91a5ede745773b1c4ce57726a1364572cd4bef51 100644
--- a/docs/site/docs/cookbook/query.mdx
+++ b/docs/site/docs/cookbook/query.mdx
@@ -8,7 +8,113 @@ import TabItem from '@theme/TabItem';
 
 Messages in streams can be retrieved based on their metadata. Both the technical information (e.g. ID or timestamp) and the user metadata (see [this tutorial](metadata) for details) can be used to make a query. In this tutorial several examples of the queries are shown. The standard SQL sysntaxis is used.
 
-Only the query-relevant parts would be explained here. Look at the corresponding examples to learn about producers and consumers in detailes.
+For this example we expect several messages in the default stream, with metadata consisting of two fields: a string named ```condition``` and an integer named ```somevalue```. Go to the git repository for the full example.
+
+:::info
+Keep in mind that the query requests return only the list of metadata records for the found messages, not the messages themselves. You need to explicitly retrieve the actual data for each message.
+:::
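The selections shown below can be mirrored on a local list of metadata records, which makes the intent of each query easy to check. This is an illustration only: the `_id` field name and the record layout here are assumptions for the sketch, while the real queries are SQL-like strings passed to ASAP::O.

```python
# Illustration only: the same selections expressed over a local list of
# metadata records. The "_id"/"meta" layout is assumed for this sketch.
records = [
    {"_id": i,
     "meta": {"condition": "test" if i % 2 else "ref", "somevalue": i * 10}}
    for i in range(1, 6)
]

by_id       = [r for r in records if r["_id"] == 3]                   # single ID
id_range    = [r for r in records if 2 <= r["_id"] <= 4]              # ID range
str_equal   = [r for r in records if r["meta"]["condition"] == "test"]
int_compare = [r for r in records if 10 < r["meta"]["somevalue"] < 50]

print(len(by_id), len(id_range), len(str_equal), len(int_compare))
```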
+
+Here we can pick a message with a specific ID.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/query.py" snippetTag="by_id"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/query.cpp" snippetTag="by_id"
+```
+
+</TabItem>
+</Tabs>
+
+We can also use a simple rule to pick a range of IDs.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/query.py" snippetTag="by_ids"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/query.cpp" snippetTag="by_ids"
+```
+
+</TabItem>
+</Tabs>
+
+We can query the messages based on their metadata, for example requesting a specific value of the string field.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/query.py" snippetTag="string_equal"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/query.cpp" snippetTag="string_equal"
+```
+
+</TabItem>
+</Tabs>
+
+We can also impose more complex constraints on the metadata, e.g. a range for an integer field.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/query.py" snippetTag="int_compare"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/query.cpp" snippetTag="int_compare"
+```
+
+</TabItem>
+</Tabs>
+
+Since every message comes with a timestamp, we can make constraints on it as well. For example, request all the messages from the last 15 minutes.
 
 <Tabs
   groupId="language"
@@ -20,17 +126,17 @@ Only the query-relevant parts would be explained here. Look at the corresponding
 }>
 <TabItem value="python">
 
-```python content="./examples/python/query.py"
+```python content="./examples/python/query.py" snippetTag="timestamp"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/query.cpp"
+```cpp content="./examples/cpp/query.cpp" snippetTag="timestamp"
 ```
 
 </TabItem>
 </Tabs>
 
-The output will show the message selection together with the conditions used for selection.
+The output of the full example will show the message selection together with the conditions used for selection.
diff --git a/docs/site/docs/cookbook/simple-consumer.mdx b/docs/site/docs/cookbook/simple-consumer.mdx
index 41577dfd892456f9ed7f2db0550f13de490246ab..402fc64a0349b3b93343d5b0dd4299ede0708977 100644
--- a/docs/site/docs/cookbook/simple-consumer.mdx
+++ b/docs/site/docs/cookbook/simple-consumer.mdx
@@ -6,10 +6,12 @@ title: Simple Consumer
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-This example shows how to consume a message. It also shows how to organize the simple loop or extract metadata.
+This example shows how to consume a message. This page provides snippets for a simple consumer. You can go to BitBucket to see the whole example at once; the files there form a working example ready to launch.
 
 A special access token is needed to create a consumer. For the purpose of this tutorial a special "test" token is used. It will only work for the beamtime called "asapo_test".
 
+The first step is to create an instance of the consumer.
+
 <Tabs
   groupId="language"
   defaultValue="python"
@@ -21,39 +23,160 @@ A special access token is needed to create a consumer. For the purpose of this t
 }>
 <TabItem value="python">
 
-```python content="./examples/python/consume.py"
+```python content="./examples/python/consume.py" snippetTag="create"
 ```
 
-Execute it with python3
+</TabItem>
+
+<TabItem value="cpp">
 
+```cpp content="./examples/cpp/consume.cpp" snippetTag="create"
 ```
-$ python3 consumer.py
+
+</TabItem>
+
+<TabItem value="c">
+
+```c content="./examples/c/consume.c" snippetTag="create"
+```
+
+</TabItem>
+
+</Tabs>
+
+You can list all the streams within the beamtime.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+    { label: 'C', value: 'c', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/consume.py" snippetTag="list"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/consume.cpp"
+```cpp content="./examples/cpp/consume.cpp" snippetTag="list"
 ```
 
-Compile e.g. using CMake and execute. You might need to point cmake (with CMAKE_PREFIX_PATH) to asapo installation and curl library if installed to non-standard location.
+</TabItem>
 
-```shell content="./examples/cpp/CMakeLists.txt" snippetTag="#consumer"
+</Tabs>
+
+The actual consuming of the messages will probably be done in a loop. Here is an example of how such a loop could be organized. It will run until the stream is finished, or until no new messages are received within the timeout.
+
+You need to use a group ID, which can be shared by several consumers working in parallel. You can either generate one or use a random string.
+
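The shape of such a loop can be sketched with the standard library. This is an illustration of the two exit conditions only (end of stream, and a timeout with no new messages), not the ASAP::O API:

```python
# Illustration only: a consume loop that stops either on end-of-stream or
# when no new message arrives within the timeout. Not the ASAP::O API.
import queue

inbox = queue.Queue()
for i in range(1, 4):
    inbox.put({"id": i, "data": f"message {i}"})
inbox.put(None)                        # sentinel: the stream was finished

processed = []
while True:
    try:
        msg = inbox.get(timeout=0.5)   # give up if nothing arrives in time
    except queue.Empty:
        print("stream ended (timeout)")
        break
    if msg is None:
        print("stream finished")
        break
    processed.append(msg["id"])        # handle the message here
```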
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+    { label: 'C', value: 'c', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/consume.py" snippetTag="consume"
 ```
 
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/consume.cpp" snippetTag="consume"
 ```
-$ cmake . && make
-$ ./asapo-consume
+
+</TabItem>
+
+<TabItem value="c">
+
+```c content="./examples/c/consume.c" snippetTag="consume"
+```
+
+</TabItem>
+
+</Tabs>
+
+After consuming the stream you can delete it.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+    { label: 'C', value: 'c', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/consume.py" snippetTag="delete"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/consume.cpp" snippetTag="delete"
 ```
 
 </TabItem>
 
 <TabItem value="c">
 
-```c content="./examples/c/consume.c"
+```c content="./examples/c/consume.c" snippetTag="delete"
+```
+
+</TabItem>
+
+</Tabs>
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+    { label: 'C', value: 'c', },
+  ]
+}>
+<TabItem value="python">
+For the Python example, just launch it with the Python interpreter (make sure that the ASAP::O client Python modules are installed).
+
+```
+$ python3 consumer.py
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+For the C++ example you need to compile it first. The easiest way to do this is to install the ASAP::O client dev packages and use the CMake find_package function. CMake will generate a makefile that you can then use to compile the example.
+
+The example CMake file can look like this.
+
+```cmake content="./examples/cpp/CMakeLists.txt" snippetTag="#consumer"
+```
+
+You can use it like this.
+
+```bash
+$ cmake . && make
+$ ./asapo-consume
 ```
 
+</TabItem>
+
+<TabItem value="c">
 Compile e.g. using Makefile and pkg-config (although we recommend CMake -  see C++ section) and execute. This example assumes asapo is installed to /opt/asapo. Adjust correspondingly.
 
 ```makefile content="./examples/c/Makefile" snippetTag="#consumer"
diff --git a/docs/site/docs/cookbook/simple-pipeline.mdx b/docs/site/docs/cookbook/simple-pipeline.mdx
index eb4ef682315fd87ba8433022516e445a5347f5f9..12e75f52367664c21f31b76893e7d6e48d94e69b 100644
--- a/docs/site/docs/cookbook/simple-pipeline.mdx
+++ b/docs/site/docs/cookbook/simple-pipeline.mdx
@@ -6,7 +6,34 @@ title: Simple Pipeline
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-The consumer and a producer could be combined together in order to create pipelines. Look at the corresponding examples to learn about producers and consumers in detailes. Only the pipeline-related things will be explained here.
+A consumer and a producer can be combined to create pipelines. Look at the corresponding examples to learn about producers and consumers in detail.
+
+Here is a snippet that shows how to organize a pipelined loop. The full runnable example can be found in the git repository.
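The essence of the pipelined loop is consume, transform, produce. A minimal plain-Python sketch of that shape (the payloads and the "processing" step are made up; this is not the ASAP::O API):

```python
# Illustration only: the shape of a pipeline step -- consume from one stream,
# transform the data, produce into another stream. Not the ASAP::O API.
source = [b"raw-1", b"raw-2", b"raw-3"]   # stand-in for the input stream
processed_stream = []                     # stand-in for the output stream

for data in source:
    result = data.upper()                 # the actual processing step
    processed_stream.append(result)       # re-produce into the output stream

print(processed_stream)
```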
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/pipeline.py" snippetTag="pipeline"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/pipeline.cpp" snippetTag="pipeline"
+```
+
+</TabItem>
+</Tabs>
+
+Just like any produced stream, the pipelined stream can be marked as "finished". Here is a snippet that shows how to access the ID of the last message in the stream.
 
 <Tabs
   groupId="language"
@@ -18,14 +45,14 @@ The consumer and a producer could be combined together in order to create pipeli
 }>
 <TabItem value="python">
 
-```python content="./examples/python/pipeline.py"
+```python content="./examples/python/pipeline.py" snippetTag="finish"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/pipeline.cpp"
+```cpp content="./examples/cpp/pipeline.cpp" snippetTag="finish"
 ```
 
 </TabItem>
diff --git a/docs/site/docs/cookbook/simple-producer.mdx b/docs/site/docs/cookbook/simple-producer.mdx
index e31f65816cb4258568147d0902120d60baa91739..cf46baaf60125c4af32a5feb64e76316bacce31a 100644
--- a/docs/site/docs/cookbook/simple-producer.mdx
+++ b/docs/site/docs/cookbook/simple-producer.mdx
@@ -6,7 +6,9 @@ title: Simple Producer
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-This example produces a simple message. Use this, if you don't need any special beam/message metadata.
+This example produces a simple message. This page provides snippets for a simple producer in both Python and C++. You can go to BitBucket to see the whole example at once; the files there form a working example ready to launch.
+
+The first step is to create an instance of the producer.
 
 <Tabs
   groupId="language"
@@ -18,28 +20,124 @@ This example produces a simple message. Use this, if you don't need any special
 }>
 <TabItem value="python">
 
-```python content="./examples/python/produce.py"
+```python content="./examples/python/produce.py" snippetTag="create"
 ```
 
-Execute it with python3
+</TabItem>
 
-```shell
-$ python3 produce.py
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/produce.cpp" snippetTag="create"
+```
+
+</TabItem>
+</Tabs>
+
+Then we need to define a callback that will be used for sending. The callback is called when the message is actually sent, which may happen with a delay.
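Why a callback at all? Because the transfer happens later, on a background thread, and the callback is the place where the eventual result is reported. A self-contained sketch of that pattern (plain Python threading, not the ASAP::O API):

```python
# Illustration only: a send that completes asynchronously and reports its
# result through a callback. Not the ASAP::O producer API.
import threading

results = []

def delivery_callback(payload, err):
    # called from the background thread once the transfer completes
    results.append((payload, err))

def send_in_background(payload, callback):
    # stand-in for a producer's scheduled send: the work happens later
    t = threading.Thread(target=callback, args=(payload, None))
    t.start()
    return t

t = send_in_background(b"hello", delivery_callback)
t.join()  # a real producer keeps working; we join here only to see the result
```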
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/produce.py" snippetTag="callback"
 ```
 
 </TabItem>
 
 <TabItem value="cpp">
 
-```cpp content="./examples/cpp/produce.cpp"
+```cpp content="./examples/cpp/produce.cpp" snippetTag="callback"
+```
+
+</TabItem>
+</Tabs>
+
+Next we schedule the actual sending. This function call does not send the message immediately, it only schedules the sending. The sending happens in the background, and when it is done the callback is called (if provided).
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/produce.py" snippetTag="send"
 ```
 
-Compile e.g. using CMake and execute. You might need to point cmake (with CMAKE_PREFIX_PATH) to asapo installation and curl library if installed to non-standard location.
+</TabItem>
 
-```shell content="./examples/cpp/CMakeLists.txt" snippetTag="#producer"
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/produce.cpp" snippetTag="send"
 ```
 
-```shell
+</TabItem>
+</Tabs>
+
+The sending of the messages will probably be done in a loop. After all the data is sent, some additional actions might be needed. You may want to wait for all the background requests to finish before doing something else or exiting the application.
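The "wait for all outstanding requests" step is the same pattern as joining background work in any concurrent program. A small illustration with the standard library (this mirrors the spirit of the producer's wait call, not its actual API):

```python
# Illustration only: blocking until all background sends are finished before
# exiting, similar in spirit to the producer's wait call. Not the ASAP::O API.
from concurrent.futures import ThreadPoolExecutor

def send(i):
    # stand-in for one scheduled background send
    return f"message {i} sent"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(send, i) for i in range(5)]
    results = [f.result() for f in futures]   # blocks until all are done

print(len(results), "requests finished")
```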
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+
+```python content="./examples/python/produce.py" snippetTag="finish"
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+
+```cpp content="./examples/cpp/produce.cpp" snippetTag="finish"
+```
+
+</TabItem>
+</Tabs>
+
+You can get the full example from BitBucket and test it locally.
+
+<Tabs
+  groupId="language"
+  defaultValue="python"
+  values={[
+    { label: 'Python', value: 'python', },
+    { label: 'C++', value: 'cpp', },
+  ]
+}>
+<TabItem value="python">
+For the Python example, just launch it with the Python interpreter (make sure that the ASAP::O client Python modules are installed).
+
+```bash
+$ python3 produce.py
+```
+
+</TabItem>
+
+<TabItem value="cpp">
+For the C++ example you need to compile it first. The easiest way to do this is to install the ASAP::O client dev packages and use the CMake find_package function. CMake will generate a makefile that you can then use to compile the example.
+
+The example CMake file can look like this.
+
+```cmake content="./examples/cpp/CMakeLists.txt" snippetTag="#producer"
+```
+
+You can use it like this.
+
+```bash
 $ cmake . && make
 $ ./asapo-produce
 ```
@@ -47,4 +145,4 @@ $ ./asapo-produce
 </TabItem>
 </Tabs>
 
-You should see the "successfuly sent" message in the logs, and the file should appear in the corresponding folder (by default in ```/var/tmp/asapo/global_shared/data/test_facility/gpfs/test/2019/data/asapo_test```)
+You should see the "successfuly sent" message in the logs, and the file should appear in the corresponding folder (by default in ```/var/tmp/asapo/global_shared/data/test_facility/gpfs/test/2019/data/asapo_test```).
diff --git a/docs/site/docusaurus.config.js b/docs/site/docusaurus.config.js
index 7411889d76f2a52ca38d83f947cd101904f82255..5f2e66d52c04ab4e19e2f58901ebde843946d220 100644
--- a/docs/site/docusaurus.config.js
+++ b/docs/site/docusaurus.config.js
@@ -65,6 +65,9 @@ module.exports = {
       style: 'dark',
       copyright: `Copyright © ${new Date().getFullYear()} DESY. Built with Docusaurus.`,
     },
+    prism: {
+      additionalLanguages: ['cmake'],
+    },
   },
   presets: [
     [
diff --git a/docs/site/examples/c/consume.c b/docs/site/examples/c/consume.c
index 3038c22bf37ad44e84cd25a2a512f713472271cd..a29537c61fa0c950f4a4c29e55f6c10e634d3da4 100644
--- a/docs/site/examples/c/consume.c
+++ b/docs/site/examples/c/consume.c
@@ -18,6 +18,7 @@ int main(int argc, char* argv[]) {
     AsapoMessageMetaHandle mm = asapo_new_handle();
     AsapoMessageDataHandle data = asapo_new_handle();
 
+    /* create snippet_start */
     const char *endpoint = "localhost:8400";
     const char *beamtime = "asapo_test";
     const char *token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjk1NzE3MTAyMTYsImp0aSI6ImMzaXFhbGpmNDNhbGZwOHJua20wIiwic3ViIjoiYnRfYXNhcG9fdGVzdCIsIkV4dHJhQ2xhaW1zIjp7IkFjY2Vzc1R5cGVzIjpbIndyaXRlIiwicmVhZCJdfX0.dkWupPO-ysI4t-jtWiaElAzDyJF6T7hu_Wz_Au54mYU";
@@ -32,10 +33,12 @@ int main(int argc, char* argv[]) {
                                                          cred,
                                                          &err);
     asapo_free_handle(&cred);
+    /* create snippet_end */
 
     exit_if_error("Cannot create consumer", err);
     asapo_consumer_set_timeout(consumer, 5000ull);
 
+    /* consume snippet_start */
     AsapoStringHandle group_id = asapo_consumer_generate_new_group_id(consumer, &err);
     exit_if_error("Cannot create group id", err);
 
@@ -45,12 +48,13 @@ int main(int argc, char* argv[]) {
     printf("id: %llu\n", (unsigned long long)asapo_message_meta_get_id(mm));
     printf("file name: %s\n", asapo_message_meta_get_name(mm));
     printf("file content: %s\n", asapo_message_data_get_as_chars(data));
+    /* consume snippet_end */
 
-
-// delete stream
+    /* delete snippet_start */
     asapo_consumer_delete_stream(consumer,"default", 1,1,&err);
     exit_if_error("Cannot delete stream", err);
     printf("stream deleted\n");
+    /* delete snippet_end */
 
     asapo_free_handle(&err);
     asapo_free_handle(&mm);
diff --git a/docs/site/examples/cpp/acknowledgements.cpp b/docs/site/examples/cpp/acknowledgements.cpp
index d3613147dd2db1faf8ab03f34d5e0ad2ddf8935a..d8992ea60452fff6c5f15422cda94e658099ac1d 100644
--- a/docs/site/examples/cpp/acknowledgements.cpp
+++ b/docs/site/examples/cpp/acknowledgements.cpp
@@ -68,6 +68,7 @@ int main(int argc, char* argv[]) {
     auto group_id = consumer->GenerateNewGroupId(&err);
     exit_if_error("Cannot create group id", err);
 
+    // consume snippet_start
     asapo::MessageMeta mm;
     asapo::MessageData data;
 
@@ -88,7 +89,7 @@ int main(int argc, char* argv[]) {
             std::cout << "stream ended" << std::endl;
             break;
         }
-        exit_if_error("Cannot get next record", err);
+        exit_if_error("Cannot get next record", err); // snippet_end_remove
 
         // acknowledge all the messages except the ones in the set
         if (ids.find(mm.id) == ids.end()) {
@@ -111,18 +112,21 @@ int main(int argc, char* argv[]) {
             }
         }
     } while (1);
+    // consume snippet_end
 
+    // print snippet_start
     auto unacknowledgedMessages = consumer->GetUnacknowledgedMessages(group_id, 0, 0, "default", &err);
-    exit_if_error("Could not get list of messages", err);
+    exit_if_error("Could not get list of messages", err); // snippet_end_remove
 
     for (int i = 0; i < unacknowledgedMessages.size(); i++) {
         err = consumer->GetById(unacknowledgedMessages[i], &mm, &data, "default");
-        exit_if_error("Cannot get message", err);
+        exit_if_error("Cannot get message", err); // snippet_end_remove
 
         std::cout << "Unacknowledged message: " << reinterpret_cast<char const*>(data.get()) << std::endl;
         std::cout << "id: " << mm.id << std::endl;
         std::cout << "file name: " << mm.name << std::endl;
     }
+    // print snippet_end
 
     return EXIT_SUCCESS;
 }
diff --git a/docs/site/examples/cpp/consume.cpp b/docs/site/examples/cpp/consume.cpp
index aa55634693e10446dbe03f6313cec75215836272..f13db95c0520395cd781660459a4557374d1460b 100644
--- a/docs/site/examples/cpp/consume.cpp
+++ b/docs/site/examples/cpp/consume.cpp
@@ -12,6 +12,7 @@ void exit_if_error(std::string error_string, const asapo::Error& err) {
 int main(int argc, char* argv[]) {
     asapo::Error err;
 
+// create snippet_start
     auto endpoint = "localhost:8400";
     auto beamtime = "asapo_test";
 
@@ -23,38 +24,42 @@ int main(int argc, char* argv[]) {
                  "zIjpbIndyaXRlIiwicmVhZCJdfX0.dkWupPO-ysI4"
                  "t-jtWiaElAzDyJF6T7hu_Wz_Au54mYU";
 
-    //set it according to your configuration.
+    // set it according to your configuration.
     auto path_to_files = "/var/tmp/asapo/global_shared/data/test_facility/gpfs/test/2019/data/asapo_test";
 
-    auto credentials = asapo::SourceCredentials {
-        asapo::SourceType::kProcessed, // should be kProcessed or kRaw, kProcessed writes to the core FS
-        beamtime,                      // the folder should exist
-        "",                            // can be empty or "auto", if beamtime_id is given
-        "test_source",                 // source
-        token                          // athorization token
-    };
+    auto credentials = asapo::SourceCredentials
+            {
+                asapo::SourceType::kProcessed, // should be kProcessed or kRaw, kProcessed writes to the core FS
+                beamtime,                      // the folder should exist
+                "",                            // can be empty or "auto", if beamtime_id is given
+                "test_source",                 // source
+                token                          // authorization token
+            };
 
     auto consumer = asapo::ConsumerFactory::CreateConsumer
-                    (endpoint,
-                     path_to_files,
-                     true,             // True if the path_to_files is accessible locally, False otherwise
-                     credentials,      // same as for producer
-                     &err);
-
+        (endpoint,
+         path_to_files,
+         true,             // True if the path_to_files is accessible locally, False otherwise
+         credentials,      // same as for producer
+         &err);
+// create snippet_end
     exit_if_error("Cannot create consumer", err);
     consumer->SetTimeout(5000); // How long do you want to wait on non-finished stream for a message.
 
-    // you can get info about the streams in the beamtime
-    for (const auto& stream : consumer->GetStreamList("", asapo::StreamFilter::kAllStreams, &err)) {
+// list snippet_start
+    for (const auto& stream : consumer->GetStreamList("", asapo::StreamFilter::kAllStreams, &err))
+    {
         std::cout << "Stream name: " << stream.name << std::endl;
         std::cout << "LastId: " << stream.last_id << std::endl;
         std::cout << "Stream finished: " << stream.finished << std::endl;
         std::cout << "Next stream: " << stream.next_stream << std::endl;
     }
+// list snippet_end
 
+// consume snippet_start
     // Several consumers can use the same group_id to process messages in parallel
     auto group_id = consumer->GenerateNewGroupId(&err);
-    exit_if_error("Cannot create group id", err);
+    exit_if_error("Cannot create group id", err); // snippet_end_remove
 
     asapo::MessageMeta mm;
     asapo::MessageData data;
@@ -69,24 +74,25 @@ int main(int argc, char* argv[]) {
             std::cout << "stream finished" << std::endl;
             break;
         }
-
         if (err && err == asapo::ConsumerErrorTemplates::kEndOfStream) {
             // not-finished stream timeout, or wrong or empty stream
             std::cout << "stream ended" << std::endl;
             break;
         }
-
-        exit_if_error("Cannot get next record", err);
+        exit_if_error("Cannot get next record", err); // snippet_end_remove
 
         std::cout << "id: " << mm.id << std::endl;
         std::cout << "file name: " << mm.name << std::endl;
         std::cout << "message content: " << reinterpret_cast<char const*>(data.get()) << std::endl;
     } while (1);
+// consume snippet_end
 
+// delete snippet_start
     // you can delete the stream after consuming
     err = consumer->DeleteStream("default", asapo::DeleteStreamOptions{true, true});
-    exit_if_error("Cannot delete stream", err);
+    exit_if_error("Cannot delete stream", err); // snippet_end_remove
     std::cout << "stream deleted" << std::endl;
+// delete snippet_end
 
     return EXIT_SUCCESS;
 }
diff --git a/docs/site/examples/cpp/consume_dataset.cpp b/docs/site/examples/cpp/consume_dataset.cpp
index f869b9cd121231d4ad8ff3dd28c0ce5867242ab1..8b11aebedd40f84ff2a80b5b55df110e6858f068 100644
--- a/docs/site/examples/cpp/consume_dataset.cpp
+++ b/docs/site/examples/cpp/consume_dataset.cpp
@@ -33,6 +33,7 @@ int main(int argc, char* argv[]) {
     auto group_id = consumer->GenerateNewGroupId(&err);
     exit_if_error("Cannot create group id", err);
 
+    // dataset snippet_start
     asapo::DataSet ds;
     asapo::MessageData data;
 
@@ -48,18 +49,20 @@ int main(int argc, char* argv[]) {
             std::cout << "stream ended" << std::endl;
             break;
         }
-        exit_if_error("Cannot get next record", err);
+        exit_if_error("Cannot get next record", err); // snippet_end_remove
 
         std::cout << "Dataset Id: " << ds.id << std::endl;
 
-        for(int i = 0; i < ds.content.size(); i++) {
+        for(int i = 0; i < ds.content.size(); i++)
+        {
             err = consumer->RetrieveData(&ds.content[i], &data);
-            exit_if_error("Cannot get dataset content", err);
+            exit_if_error("Cannot get dataset content", err); // snippet_end_remove
 
             std::cout << "Part " << ds.content[i].dataset_substream << " out of " << ds.expected_size << std:: endl;
             std::cout << "message content: " << reinterpret_cast<char const*>(data.get()) << std::endl;
         }
     } while (1);
+    // dataset snippet_end
 
     err = consumer->DeleteStream("default", asapo::DeleteStreamOptions{true, true});
     exit_if_error("Cannot delete stream", err);
diff --git a/docs/site/examples/cpp/metadata.cpp b/docs/site/examples/cpp/metadata.cpp
index 2a9e5dda2e07c42852dd8fb41fc7b24fe3550d11..f63eb1d420f4d89705041e22823a1869e59d48ce 100644
--- a/docs/site/examples/cpp/metadata.cpp
+++ b/docs/site/examples/cpp/metadata.cpp
@@ -42,53 +42,59 @@ int main(int argc, char* argv[]) {
     exit_if_error("Cannot start producer", err);
     producer->SetLogLevel(asapo::LogLevel::Error);
 
+    // beamtime_set snippet_start
     // sample beamtime metadata. You can add any data you want, with any level of complexity
     // in this example we use strings and ints, and one nested structure
     auto beamtime_metadata = "{"
-                             "   \"name\": \"beamtime name\","
-                             "   \"condition\": \"beamtime condition\","
-                             "   \"intvalue1\": 5,"
-                             "   \"intvalue2\": 10,"
-                             "   \"structure\": {"
-                             "       \"structint1\": 20,"
-                             "       \"structint2\": 30"
-                             "   }"
-                             "}";
+    "   \"name\": \"beamtime name\","
+    "   \"condition\": \"beamtime condition\","
+    "   \"intvalue1\": 5,"
+    "   \"intvalue2\": 10,"
+    "   \"structure\": {"
+    "       \"structint1\": 20,"
+    "       \"structint2\": 30"
+    "   }"
+    "}";
 
     // send the metadata
     // with this call the new metadata will completely replace the one that's already there
     err = producer->SendBeamtimeMetadata(beamtime_metadata, asapo::MetaIngestMode{asapo::MetaIngestOp::kReplace, true}, &ProcessAfterSend);
+    // beamtime_set snippet_end
     exit_if_error("Cannot send metadata", err);
 
+    // beamtime_update snippet_start
     // we can update the existing metadata if we want, by modifying the existing fields, or adding new ones
     auto beamtime_metadata_update = "{"
-                                    "    \"condition\": \"updated beamtime condition\","
-                                    "    \"newintvalue\": 15"
-                                    "}";
+    "    \"condition\": \"updated beamtime condition\","
+    "    \"newintvalue\": 15"
+    "}";
 
     // send the metadata in the 'kUpdate' mode
     err = producer->SendBeamtimeMetadata(beamtime_metadata_update, asapo::MetaIngestMode{asapo::MetaIngestOp::kUpdate, true}, &ProcessAfterSend);
+    // beamtime_update snippet_end
     exit_if_error("Cannot send metadata", err);
 
+    // stream_set snippet_start
     // sample stream metadata
     auto stream_metadata = "{"
-                           "    \"name\": \"stream name\","
-                           "    \"condition\": \"stream condition\","
-                           "    \"intvalue\": 44"
-                           "}";
+    "    \"name\": \"stream name\","
+    "    \"condition\": \"stream condition\","
+    "    \"intvalue\": 44"
+    "}";
 
     // works the same way: for the initial set we use 'kReplace' the stream metadata, but update is also possible
     // update works exactly the same as for beamtime, but here we will only do 'kReplace'
-    err = producer->SendStreamMetadata(stream_metadata, asapo::MetaIngestMode{asapo::MetaIngestOp::kUpdate, true},
-                                       "default", &ProcessAfterSend);
+    err = producer->SendStreamMetadata(stream_metadata, asapo::MetaIngestMode{asapo::MetaIngestOp::kUpdate, true}, "default", &ProcessAfterSend);
+    // stream_set snippet_end
     exit_if_error("Cannot send metadata", err);
 
+    // message_set snippet_start
     // sample message metadata
     auto message_metadata = "{"
-                            "    \"name\": \"message name\","
-                            "    \"condition\": \"message condition\","
-                            "    \"somevalue\": 55"
-                            "}";
+    "    \"name\": \"message name\","
+    "    \"condition\": \"message condition\","
+    "    \"somevalue\": 55"
+    "}";
 
     std::string data_string = "hello";
     auto send_size = data_string.size() + 1;
@@ -99,6 +105,7 @@ int main(int argc, char* argv[]) {
     // in case of datasets each part has its own metadata
     asapo::MessageHeader message_header{1, send_size, "processed/test_file", message_metadata};
     err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, "default", &ProcessAfterSend);
+    // message_set snippet_end
     exit_if_error("Cannot send message", err);
 
     err = producer->WaitRequestsFinished(2000);
@@ -107,17 +114,21 @@ int main(int argc, char* argv[]) {
     auto consumer = asapo::ConsumerFactory::CreateConsumer(endpoint, path_to_files, true, credentials, &err);
     exit_if_error("Cannot start consumer", err);
 
+    // beamtime_get snippet_start
     // read the beamtime metadata
     auto beamtime_metadata_read = consumer->GetBeamtimeMeta(&err);
-    exit_if_error("Cannot get metadata", err);
+    exit_if_error("Cannot get metadata", err); // snippet_end_remove
 
     std::cout << "Updated beamtime metadata:" << std::endl << beamtime_metadata_read << std::endl;
+    // beamtime_get snippet_end
 
+    // stream_get snippet_start
     // read the stream metadata
     auto stream_metadata_read = consumer->GetStreamMeta("default", &err);
     exit_if_error("Cannot get metadata", err);
 
     std::cout << "Stream metadata:" << std::endl << stream_metadata_read << std::endl;
+    // stream_get snippet_end
 
     auto group_id = consumer->GenerateNewGroupId(&err);
     exit_if_error("Cannot create group id", err);
@@ -126,8 +137,10 @@ int main(int argc, char* argv[]) {
     asapo::MessageData data;
 
     do {
+        // message_get snippet_start
         err = consumer->GetNext(group_id, &mm, &data, "default");
 
+        // message_get snippet_start_remove
         if (err && err == asapo::ConsumerErrorTemplates::kStreamFinished) {
             std::cout << "stream finished" << std::endl;
             break;
@@ -138,10 +151,12 @@ int main(int argc, char* argv[]) {
             break;
         }
         exit_if_error("Cannot get next record", err);
+        // message_get snippet_end_remove
 
         std::cout << "Message #" << mm.id << std::endl;
         // our custom metadata is stored inside the message metadata
         std::cout << "Message metadata:" << std::endl << mm.metadata << std::endl;
+        // message_get snippet_end
     } while (1);
 
     return EXIT_SUCCESS;
diff --git a/docs/site/examples/cpp/next_stream.cpp b/docs/site/examples/cpp/next_stream.cpp
index 964689eec3a88a37285b833224dd8209b3bcbfb8..41ecf9f5f68c0b32841efae24e85ee9cb06ff578 100644
--- a/docs/site/examples/cpp/next_stream.cpp
+++ b/docs/site/examples/cpp/next_stream.cpp
@@ -55,8 +55,10 @@ int main(int argc, char* argv[]) {
         exit_if_error("Cannot send message", err);
     }
 
+    // next_stream_set snippet_start
     // finish the stream and set the next stream to be called 'next'
     producer->SendStreamFinishedFlag("default", 10, "next", &ProcessAfterSend);
+    // next_stream_set snippet_end
 
     // populate the 'next' stream as well
     for (uint64_t i = 1; i <= 5; i++) {
@@ -83,6 +85,7 @@ int main(int argc, char* argv[]) {
     asapo::MessageMeta mm;
     asapo::MessageData data;
 
+    // read_stream snippet_start
     // we start with the 'default' stream (the first one)
     std::string stream_name = "default";
 
@@ -114,10 +117,11 @@ int main(int argc, char* argv[]) {
             std::cout << "stream ended" << std::endl;
             break;
         }
-        exit_if_error("Cannot get next record", err);
+        exit_if_error("Cannot get next record", err); // snippet_end_remove
 
         std::cout << "Message #" << mm.id << ", message content: " << reinterpret_cast<char const*>(data.get()) << std::endl;
     } while (1);
+    // read_stream snippet_end
 
     return EXIT_SUCCESS;
 }
diff --git a/docs/site/examples/cpp/pipeline.cpp b/docs/site/examples/cpp/pipeline.cpp
index d1022cd8c1f7bf20afd1bbc846b7fc079e595286..c63c7ee5f41348d3726cc4715f015c3b61840f24 100644
--- a/docs/site/examples/cpp/pipeline.cpp
+++ b/docs/site/examples/cpp/pipeline.cpp
@@ -46,6 +46,7 @@ int main(int argc, char* argv[]) {
     auto group_id = consumer->GenerateNewGroupId(&err);
     exit_if_error("Cannot create group id", err);
 
+    // pipeline snippet_start
     // put the processed message into the new stream
     auto pipelined_stream_name = "pipelined";
 
@@ -65,7 +66,7 @@ int main(int argc, char* argv[]) {
             std::cout << "stream ended" << std::endl;
             break;
         }
-        exit_if_error("Cannot get next record", err);
+        exit_if_error("Cannot get next record", err); // snippet_end_remove
 
         // work on our data
         auto processed_string = std::string(reinterpret_cast<char const*>(data.get())) + " processed";
@@ -75,19 +76,21 @@ int main(int argc, char* argv[]) {
 
         // you may use the same filename, if you want to rewrite the source file. This will result in warning, but it is a valid usecase
         asapo::MessageHeader message_header{mm.id, send_size, std::string("processed/test_file_") + std::to_string(mm.id)};
-        err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, pipelined_stream_name,
-                             &ProcessAfterSend);
-        exit_if_error("Cannot send message", err);
+        err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, pipelined_stream_name, &ProcessAfterSend);
+        exit_if_error("Cannot send message", err); // snippet_end_remove
     } while (1);
+    // pipeline snippet_end
 
 
     err = producer->WaitRequestsFinished(2000);
     exit_if_error("Producer exit on timeout", err);
 
+    // finish snippet_start
     // the meta from the last iteration corresponds to the last message
     auto last_id = mm.id;
 
-    err = producer->SendStreamFinishedFlag("pipelined", last_id, "", &ProcessAfterSend);
+    err = producer->SendStreamFinishedFlag("pipelined", last_id, "", &ProcessAfterSend);
+    // finish snippet_end
     exit_if_error("Cannot finish stream", err);
 
     // you can remove the source stream if you do not need it anymore
diff --git a/docs/site/examples/cpp/produce.cpp b/docs/site/examples/cpp/produce.cpp
index aee2a9b74db8f5ae583b331f593032cceafc3c68..157d53d060d731827ec42c751f54f7aa310f268f 100644
--- a/docs/site/examples/cpp/produce.cpp
+++ b/docs/site/examples/cpp/produce.cpp
@@ -1,6 +1,7 @@
 #include "asapo/asapo_producer.h"
 #include <iostream>
 
+// callback snippet_start
 void ProcessAfterSend(asapo::RequestCallbackPayload payload, asapo::Error err) {
     if (err && err != asapo::ProducerErrorTemplates::kServerWarning) {
         // the data was not sent. Something is terribly wrong.
@@ -15,6 +16,7 @@ void ProcessAfterSend(asapo::RequestCallbackPayload payload, asapo::Error err) {
         return;
     }
 }
+// callback snippet_end
 
 void exit_if_error(std::string error_string, const asapo::Error& err) {
     if (err) {
@@ -24,18 +26,20 @@ void exit_if_error(std::string error_string, const asapo::Error& err) {
 }
 
 int main(int argc, char* argv[]) {
+// create snippet_start
     asapo::Error err;
 
     auto endpoint = "localhost:8400";
     auto beamtime = "asapo_test";
 
-    auto credentials = asapo::SourceCredentials {
-        asapo::SourceType::kProcessed, // should be kProcessed or kRaw, kProcessed writes to the core FS
-        beamtime,                      // the folder should exist
-        "",                            // can be empty or "auto", if beamtime_id is given
-        "test_source",                 // source
-        ""                             // athorization token
-    };
+    auto credentials = asapo::SourceCredentials
+            {
+                asapo::SourceType::kProcessed, // should be kProcessed or kRaw, kProcessed writes to the core FS
+                beamtime,                      // the folder should exist
+                "",                            // can be empty or "auto", if beamtime_id is given
+                "test_source",                 // source
+                ""                             // authorization token
+            };
 
     auto producer = asapo::Producer::Create(endpoint,
                                             1,                               // number of threads. Increase, if the sending speed seems slow
@@ -43,8 +47,10 @@ int main(int argc, char* argv[]) {
                                             credentials,
                                             60000,                           // timeout. Do not change.
                                             &err);
+// create snippet_end
     exit_if_error("Cannot start producer", err);
 
+// send snippet_start
     // the message must be manually copied to the buffer of the relevant size
     std::string to_send = "hello";
     auto send_size = to_send.size() + 1;
@@ -55,23 +61,26 @@ int main(int argc, char* argv[]) {
     asapo::MessageHeader message_header{1, send_size, "processed/test_file"};
     // use the default stream
     err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, "default", &ProcessAfterSend);
+// send snippet_end
     exit_if_error("Cannot send message", err);
 
     // send data in loop
 
     // add the following at the end of the script
 
+// finish snippet_start
     err = producer->WaitRequestsFinished(2000); // will synchronously wait for all the data to be sent.
-    // Use it when no more data is expected.
-    exit_if_error("Producer exit on timeout", err);
+                                                // Use it when no more data is expected.
+    exit_if_error("Producer exit on timeout", err); // snippet_end_remove
 
     // you may want to mark the stream as finished
     err = producer->SendStreamFinishedFlag("default",          // name of the stream.
                                            1,                  // the number of the last message in the stream
                                            "",                 // next stream or empty
                                            &ProcessAfterSend);
-    exit_if_error("Cannot finish stream", err);
+    exit_if_error("Cannot finish stream", err); // snippet_end_remove
     std::cout << "stream finished" << std::endl;
+// finish snippet_end
 
     return EXIT_SUCCESS;
 }
diff --git a/docs/site/examples/cpp/produce_dataset.cpp b/docs/site/examples/cpp/produce_dataset.cpp
index aaefeed40380abe86134488700ae632f4cf09f44..3de79fa422f994b8aeaf5eacc27f6903380cfa91 100644
--- a/docs/site/examples/cpp/produce_dataset.cpp
+++ b/docs/site/examples/cpp/produce_dataset.cpp
@@ -31,6 +31,7 @@ int main(int argc, char* argv[]) {
     auto producer = asapo::Producer::Create(endpoint, 1, asapo::RequestHandlerType::kTcp, credentials, 60000, &err);
     exit_if_error("Cannot start producer", err);
 
+    // dataset snippet_start
     std::string to_send = "hello dataset 1";
     auto send_size = to_send.size() + 1;
     auto buffer =  asapo::MessageData(new uint8_t[send_size]);
@@ -40,7 +41,7 @@ int main(int argc, char* argv[]) {
     asapo::MessageHeader message_header{1, send_size, "processed/test_file_dataset_1", "", 1, 3};
 
     err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, "default", &ProcessAfterSend);
-    exit_if_error("Cannot send message", err);
+    exit_if_error("Cannot send message", err); // snippet_end_remove
 
     // this can be done from different producers in any order
     // we do not recalculate send_size since we know it to be the same
@@ -51,7 +52,7 @@ int main(int argc, char* argv[]) {
 
     message_header.dataset_substream = 2;
     err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, "default", &ProcessAfterSend);
-    exit_if_error("Cannot send message", err);
+    exit_if_error("Cannot send message", err); // snippet_end_remove
 
     to_send = "hello dataset 3";
     buffer =  asapo::MessageData(new uint8_t[send_size]);
@@ -59,7 +60,8 @@ int main(int argc, char* argv[]) {
 
     message_header.dataset_substream = 3;
     err = producer->Send(message_header, std::move(buffer), asapo::kDefaultIngestMode, "default", &ProcessAfterSend);
-    exit_if_error("Cannot send message", err);
+    exit_if_error("Cannot send message", err); // snippet_end_remove
+    // dataset snippet_end
 
     err = producer->WaitRequestsFinished(2000);
     exit_if_error("Producer exit on timeout", err);
diff --git a/docs/site/examples/cpp/query.cpp b/docs/site/examples/cpp/query.cpp
index 243e8bd36b062f04307e83d34b370e590fac1f36..78370839e74ee95eb0b9f55fadfc5e7a77e2044e 100644
--- a/docs/site/examples/cpp/query.cpp
+++ b/docs/site/examples/cpp/query.cpp
@@ -61,9 +61,9 @@ int main(int argc, char* argv[]) {
     // let's start with producing some messages with metadata
     for (uint64_t i = 1; i <= 10; i++) {
         auto message_metadata = "{"
-                                "    \"condition\": \"condition #" + std::to_string(i) + "\","
-                                "    \"somevalue\": " + std::to_string(i * 10) +
-                                "}";
+        "    \"condition\": \"condition #" + std::to_string(i) + "\","
+        "    \"somevalue\": " + std::to_string(i * 10) +
+        "}";
 
         std::string to_send = "message#" + std::to_string(i);
         auto send_size = to_send.size() + 1;
@@ -82,33 +82,44 @@ int main(int argc, char* argv[]) {
     exit_if_error("Cannot create group id", err);
     consumer->SetTimeout(5000);
 
+    // by_id snippet_start
+    // simple query, same as GetById
     auto metadatas = consumer->QueryMessages("_id = 1", "default", &err);
+    // by_id snippet_end
     exit_if_error("Cannot query messages", err);
     std::cout << "Message with ID = 1" << std::endl;
     PrintMessages(metadatas, consumer);
 
+    // by_ids snippet_start
+    // the query that requests the range of IDs
     metadatas = consumer->QueryMessages("_id >= 8", "default", &err);
+    // by_ids snippet_end
     exit_if_error("Cannot query messages", err);
     std::cout << "essages with ID >= 8" << std::endl;
     PrintMessages(metadatas, consumer);
 
+    // string_equal snippet_start
+    // the query that has some specific requirement for message metadata
     metadatas = consumer->QueryMessages("meta.condition = \"condition #7\"", "default", &err);
+    // string_equal snippet_end
     exit_if_error("Cannot query messages", err);
     std::cout << "Message with condition = 'condition #7'" << std::endl;
     PrintMessages(metadatas, consumer);
 
+    // int_compare snippet_start
+    // the query that has several requirements for user metadata
     metadatas = consumer->QueryMessages("meta.somevalue > 30 AND meta.somevalue < 60", "default", &err);
+    // int_compare snippet_end
     exit_if_error("Cannot query messages", err);
     std::cout << "Message with 30 < somevalue < 60" << std::endl;
     PrintMessages(metadatas, consumer);
 
-    auto now = std::chrono::duration_cast<std::chrono::nanoseconds>
-               (std::chrono::system_clock::now().time_since_epoch()).count();
-    auto fifteen_minutes_ago = std::chrono::duration_cast<std::chrono::nanoseconds>((std::chrono::system_clock::now() -
-                               std::chrono::minutes(15)).time_since_epoch()).count();
-    std::cout << now << " " << fifteen_minutes_ago << std::endl;
-    metadatas = consumer->QueryMessages("timestamp < " + std::to_string(now) + " AND timestamp > " + std::to_string(
-                                            fifteen_minutes_ago), "default", &err);
+    // timestamp snippet_start
+    // the query that is based on the message's timestamp
+    auto now = std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
+    auto fifteen_minutes_ago = std::chrono::duration_cast<std::chrono::nanoseconds>((std::chrono::system_clock::now() - std::chrono::minutes(15)).time_since_epoch()).count();
+    metadatas = consumer->QueryMessages("timestamp < " + std::to_string(now) + " AND timestamp > " + std::to_string(fifteen_minutes_ago), "default", &err);
+    // timestamp snippet_end
     exit_if_error("Cannot query messages", err);
     std::cout << "Messages in the last 15 minutes" << std::endl;
     PrintMessages(metadatas, consumer);
diff --git a/docs/site/examples/python/acknowledgements.py b/docs/site/examples/python/acknowledgements.py
index f9eb7aba304a668d5f0af7cee5ab3761f9d7f88c..32d3055d059dd0e37d926dbf812bdf05694ef62d 100644
--- a/docs/site/examples/python/acknowledgements.py
+++ b/docs/site/examples/python/acknowledgements.py
@@ -35,6 +35,7 @@ group_id = consumer.generate_group_id()
 # the flag to separate the first attempt for message #3
 firstTryNegative = True
 
+# consume snippet_start
 try:
     while True:
         data, meta = consumer.get_next(group_id, meta_only = False)
@@ -64,7 +65,10 @@ except asapo_consumer.AsapoStreamFinishedError:
 
 except asapo_consumer.AsapoEndOfStreamError:
     print('stream ended')
+# consume snippet_end
 
+# print snippet_start
 for message_id in consumer.get_unacknowledged_messages(group_id):
     data, meta = consumer.get_by_id(message_id, meta_only = False)
     print('Unacknowledged message:', data.tobytes().decode("utf-8"), meta)
+# print snippet_end
diff --git a/docs/site/examples/python/consume.py b/docs/site/examples/python/consume.py
index dfddaf8d46492b2c9a218cea135797535f2fab78..6180fef9ee83e0a37d85ae338d322fcc5248a41a 100644
--- a/docs/site/examples/python/consume.py
+++ b/docs/site/examples/python/consume.py
@@ -1,5 +1,6 @@
 import asapo_consumer
 
+#create snippet_start
 endpoint = "localhost:8400"
 beamtime = "asapo_test"
 
@@ -22,20 +23,20 @@ consumer = asapo_consumer \
                                  "test_source",  # Same as for the producer
                                  token,          # Access token
                                  5000)           # Timeout. How long do you want to wait on non-finished stream for a message.
+#create snippet_end
 
-
-# you can get info about the streams in the beamtime
+#list snippet_start
 for stream in consumer.get_stream_list():
     print("Stream name: ", stream['name'], "\n",
           "LastId: ", stream['lastId'], "\n",
           "Stream finished: ", stream['finished'], "\n",
           "Next stream: ", stream['nextStream'])
+#list snippet_end
 
-
+#consume snippet_start
 group_id = consumer.generate_group_id() # Several consumers can use the same group_id to process messages in parallel
 
 try:
-
     # get_next is the main function to get messages from streams. You would normally call it in loop.
     # you can either manually compare the meta['_id'] to the stream['lastId'], or wait for the exception to happen
     while True:
@@ -44,8 +45,11 @@ try:
 
 except asapo_consumer.AsapoStreamFinishedError:
     print('stream finished') # all the messages in the stream were processed
-        
+
 except asapo_consumer.AsapoEndOfStreamError:
     print('stream ended')    # not-finished stream timeout, or wrong or empty stream
+#consume snippet_end
 
+#delete snippet_start
 consumer.delete_stream(error_on_not_exist = True) # you can delete the stream after consuming
+#delete snippet_end
diff --git a/docs/site/examples/python/consume_dataset.py b/docs/site/examples/python/consume_dataset.py
index cc81a95d33382087698f217c926041432f435de2..8ed7711d784c5e8b5ef65f99cdeed5846d5ebbec 100644
--- a/docs/site/examples/python/consume_dataset.py
+++ b/docs/site/examples/python/consume_dataset.py
@@ -18,11 +18,11 @@ consumer = asapo_consumer.create_consumer(endpoint, path_to_files, True, beamtim
 
 group_id = consumer.generate_group_id()
 
+# dataset snippet_start
 try:
-
     # get_next_dataset behaves similarly to the regular get_next
     while True:
-        dataset = consumer.get_next_dataset(group_id, stream = 'pipelined')
+        dataset = consumer.get_next_dataset(group_id, stream = 'default')
         print ('Dataset Id:', dataset['id'])
         # the initial response only contains the metadata
         # the actual content should be retrieved separately
@@ -33,6 +33,7 @@ try:
 
 except asapo_consumer.AsapoStreamFinishedError:
     print('stream finished')
-        
+
 except asapo_consumer.AsapoEndOfStreamError:
     print('stream ended')
+# dataset snippet_end
diff --git a/docs/site/examples/python/metadata.py b/docs/site/examples/python/metadata.py
index e2860c1653a7bc5d60837789b2e9cb5455b101bf..11346dc0890c59e9950655fc2e8e1fec0c6331a6 100644
--- a/docs/site/examples/python/metadata.py
+++ b/docs/site/examples/python/metadata.py
@@ -25,6 +25,7 @@ path_to_files = "/var/tmp/asapo/global_shared/data/test_facility/gpfs/test/2019/
 producer = asapo_producer.create_producer(endpoint, 'processed', beamtime, 'auto', 'test_source', '', 1, 60000)
 producer.set_log_level('error')
 
+# beamtime_set snippet_start
 # sample beamtime metadata. You can add any data you want, with any level of complexity
 # in this example we use strings and ints, and one nested structure
 beamtime_metadata = {
@@ -41,7 +42,9 @@ beamtime_metadata = {
 # send the metadata
 # by default the new metadata will completely replace the one that's already there
 producer.send_beamtime_meta(json.dumps(beamtime_metadata), callback = callback)
+# beamtime_set snippet_end
 
+# beamtime_update snippet_start
 # we can update the existing metadata if we want, by modifying the existing fields, or adding new ones
 beamtime_metadata_update = {
     'condition': 'updated beamtime condition',
@@ -50,7 +53,9 @@ beamtime_metadata_update = {
 
 # send the metadata in the 'update' mode
 producer.send_beamtime_meta(json.dumps(beamtime_metadata_update), mode = 'update', callback = callback)
+# beamtime_update snippet_end
 
+# stream_set snippet_start
 # sample stream metadata
 stream_metadata = {
     'name': 'stream name',
@@ -61,7 +66,9 @@ stream_metadata = {
 # works the same way: by default we replace the stream metadata, but update is also possible
 # update works exactly the same as for beamtime, but here we will only do 'replace'
 producer.send_stream_meta(json.dumps(stream_metadata), callback = callback)
+# stream_set snippet_end
 
+# message_set snippet_start
 # sample message metadata
 message_metadata = {
     'name': 'message name',
@@ -72,11 +79,13 @@ message_metadata = {
 # the message metadata is sent together with the message itself
 # in case of datasets each part has its own metadata
 producer.send(1, "processed/test_file", b'hello', user_meta = json.dumps(message_metadata), stream = "default", callback = callback)
+# message_set snippet_end
 
 producer.wait_requests_finished(2000)
 
 consumer = asapo_consumer.create_consumer(endpoint, path_to_files, True, beamtime, "test_source", token, 5000)
 
+# beamtime_get snippet_start
 # read the beamtime metadata
 beamtime_metadata_read = consumer.get_beamtime_meta()
 
@@ -86,7 +95,9 @@ print('Condition:', beamtime_metadata_read['condition'])
 print('Updated value exists:', 'newintvalue' in beamtime_metadata_read)
 print('Sum of int values:', beamtime_metadata_read['intvalue1'] + beamtime_metadata_read['intvalue2'])
 print('Nested structure value', beamtime_metadata_read['structure']['structint1'])
+# beamtime_get snippet_end
 
+# stream_get snippet_start
 # read the stream metadata
 stream_metadata_read = consumer.get_stream_meta(stream = 'default')
 
@@ -94,10 +105,12 @@ stream_metadata_read = consumer.get_stream_meta(stream = 'default')
 print('Stream Name:', stream_metadata_read['name'])
 print('Stream Condition:', stream_metadata_read['condition'])
 print('Stream int value:', stream_metadata_read['intvalue'])
+# stream_get snippet_end
 
 group_id = consumer.generate_group_id()
 try:
     while True:
+        # message_get snippet_start
         # right now we are only interested in metadata
         data, meta = consumer.get_next(group_id, meta_only = True)
         print('Message #', meta['_id'])
@@ -107,6 +120,7 @@ try:
         print('Message Name:', message_metadata_read['name'])
         print('Message Condition:', message_metadata_read['condition'])
         print('Message int value:', message_metadata_read['somevalue'])
+        # message_get snippet_end
 except asapo_consumer.AsapoStreamFinishedError:
     print('stream finished')
 
diff --git a/docs/site/examples/python/next_stream.py b/docs/site/examples/python/next_stream.py
index ff47e8645ded2606f8583f847af997a534471fce..d88638185b1d64f81023461d59111409c2c40af6 100644
--- a/docs/site/examples/python/next_stream.py
+++ b/docs/site/examples/python/next_stream.py
@@ -27,8 +27,10 @@ producer.set_log_level('error')
 for i in range(1, 11):
     producer.send(i, "processed/test_file_" + str(i), ('content of the message #' + str(i)).encode(), stream = 'default', callback = callback)
 
+# next_stream_set snippet_start
 # finish the stream and set the next stream to be called 'next'
 producer.send_stream_finished_flag('default', i, next_stream = 'next', callback = callback)
+# next_stream_set snippet_end
 
 # populate the 'next' stream as well
 for i in range(1, 6):
@@ -41,6 +43,7 @@ producer.wait_requests_finished(2000)
 consumer = asapo_consumer.create_consumer(endpoint, path_to_files, True, beamtime, "test_source", token, 5000)
 group_id = consumer.generate_group_id()
 
+# read_stream snippet_start
 # we start with the 'default' stream (the first one)
 stream_name = 'default'
 
@@ -66,3 +69,4 @@ while True:
     except asapo_consumer.AsapoEndOfStreamError:
         print('stream ended')
         break
+# read_stream snippet_end
diff --git a/docs/site/examples/python/pipeline.py b/docs/site/examples/python/pipeline.py
index 71c782032f2df69e5e64d514e8afadfb8ebcf05c..c2f8152b09dc6951db8a46dcb6396944e6cb1a71 100644
--- a/docs/site/examples/python/pipeline.py
+++ b/docs/site/examples/python/pipeline.py
@@ -25,7 +25,7 @@ consumer = asapo_consumer.create_consumer(endpoint, path_to_files, True, beamtim
 producer = asapo_producer.create_producer(endpoint, 'processed', beamtime, 'auto', 'test_source', '', 1, 60000)
 
 group_id = consumer.generate_group_id()
-
+# pipeline snippet_start
 # put the processed message into the new stream
 pipelined_stream_name = 'pipelined'
 
@@ -48,13 +48,15 @@ except asapo_consumer.AsapoStreamFinishedError:
         
 except asapo_consumer.AsapoEndOfStreamError:
     print('stream ended')
-
+# pipeline snippet_end
 producer.wait_requests_finished(2000)
 
+# finish snippet_start
 # the meta from the last iteration corresponds to the last message
 last_id = meta['_id']
 
 producer.send_stream_finished_flag("pipelined", last_id)
+# finish snippet_end
 
 # you can remove the source stream if you do not need it anymore
 consumer.delete_stream(stream = 'default', error_on_not_exist = True)
diff --git a/docs/site/examples/python/produce.py b/docs/site/examples/python/produce.py
index 4d03ccb9a0009ebed824396037e521d91686b9f3..262015b25fd99be947f1756222d6a74a1bb54acb 100644
--- a/docs/site/examples/python/produce.py
+++ b/docs/site/examples/python/produce.py
@@ -1,5 +1,6 @@
 import asapo_producer
 
+# callback snippet_start
 def callback(payload,err):
     if err is not None and not isinstance(err, asapo_producer.AsapoServerWarning):
         # the data was not sent. Something is terribly wrong.
@@ -10,7 +11,9 @@ def callback(payload,err):
     else:
         # all fine
         print("successfully sent: ",payload)
+# callback snippet_end
 
+# create snippet_start
 endpoint = "localhost:8400"
 beamtime = "asapo_test"
 
@@ -25,20 +28,24 @@ producer = asapo_producer \
                                  60000)          # timeout. Do not change.
 
 producer.set_log_level("error") # other values are "warning", "info" or "debug".
+# create snippet_end
 
+# send snippet_start
 # we are sending a message with index 1 to the default stream. Filename must start with processed/
 producer.send(1,                     # message number. Should be unique and ordered.
               "processed/test_file", # name of the file. Should be unique, or it will be overwritten
               b"hello",              # binary data
               callback = callback)   # callback
-
+# send snippet_end
 # send data in loop
 
 # add the following at the end of the script
 
+# finish snippet_start
 producer.wait_requests_finished(2000) # will synchronously wait for all the data to be sent.
                                       # Use it when no more data is expected.
 
 # you may want to mark the stream as finished
 producer.send_stream_finished_flag("default", # name of the stream. If you didn't specify the stream in 'send', it would be 'default'
                                    1)         # the number of the last message in the stream
+# finish snippet_end
diff --git a/docs/site/examples/python/produce_dataset.py b/docs/site/examples/python/produce_dataset.py
index ffaae34e1a117762ab161fd949f6a42f50bcfda7..106229c430b979bba1e547fcc1fca4c1de4a7eb1 100644
--- a/docs/site/examples/python/produce_dataset.py
+++ b/docs/site/examples/python/produce_dataset.py
@@ -13,6 +13,7 @@ beamtime = "asapo_test"
 
 producer = asapo_producer.create_producer(endpoint, 'processed', beamtime, 'auto', 'test_source', '', 1, 60000)
 
+# dataset snippet_start
 #assuming we have three different producers for a single dataset
 
 # add the additional 'dataset' parameter, which should be (<part_number>, <total_parts_in_dataset>)
@@ -20,6 +21,7 @@ producer.send(1, "processed/test_file_dataset_1", b"hello dataset 1", dataset =
 # this can be done from different producers in any order
 producer.send(1, "processed/test_file_dataset_1", b"hello dataset 2", dataset = (2,3), callback = callback)
 producer.send(1, "processed/test_file_dataset_1", b"hello dataset 3", dataset = (3,3), callback = callback)
+# dataset snippet_end
 
 producer.wait_requests_finished(2000)
 # the dataset parts are not counted towards the number of messages in the stream
diff --git a/docs/site/examples/python/query.py b/docs/site/examples/python/query.py
index d3d61f7471f6af09cb4c3b0406f7d8851e6d8ec1..83cfb51c9ef8dc00cd8e518fd5869f6e557453f5 100644
--- a/docs/site/examples/python/query.py
+++ b/docs/site/examples/python/query.py
@@ -46,31 +46,41 @@ def print_messages(metadatas):
         data = consumer.retrieve_data(meta)
         print('Message #', meta['_id'], ', content:', data.tobytes().decode("utf-8"), ', usermetadata:', meta['meta'])
 
+# by_id snippet_start
 # simple query, same as get_by_id
 metadatas = consumer.query_messages('_id = 1')
+# by_id snippet_end
 print('Message with ID = 1')
 print_messages(metadatas)
 
+# by_ids snippet_start
 # the query that requests the range of IDs
 metadatas = consumer.query_messages('_id >= 8')
+# by_ids snippet_end
 print('Messages with ID >= 8')
 print_messages(metadatas)
 
+# string_equal snippet_start
 # the query that has some specific requirement for message metadata
 metadatas = consumer.query_messages('meta.condition = "condition #7"')
+# string_equal snippet_end
 print('Message with condition = "condition #7"')
 print_messages(metadatas)
 
+# int_compare snippet_start
 # the query that has several requirements for user metadata
 metadatas = consumer.query_messages('meta.somevalue > 30 AND meta.somevalue < 60')
+# int_compare snippet_end
 print('Message with 30 < somevalue < 60')
 print_messages(metadatas)
 
+# timestamp snippet_start
 # the query that is based on the message's timestamp
 now = datetime.now()
 fifteen_minutes_ago = now - timedelta(minutes = 15)
 # python uses timestamp in seconds, while ASAP::O in nanoseconds, so we need to multiply it by a billion
 metadatas = consumer.query_messages('timestamp < {} AND timestamp > {}'.format(now.timestamp() * 10**9, fifteen_minutes_ago.timestamp() * 10**9))
+# timestamp snippet_end
 print('Messages in the last 15 minutes')
 print_messages(metadatas)
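
The `snippet_start` / `snippet_end` markers added throughout these examples are parsed by the site's `CodeBlock.tsx` component (patched below). As a rough illustration of that extraction logic, here is a sketch in Python — `extract_snippet` is a hypothetical helper for illustration only, not part of the repository:

```python
def extract_snippet(source: str, tag: str) -> str:
    """Extract the lines between '<tag> snippet_start' and '<tag> snippet_end'
    markers and un-indent them, roughly mirroring the docs CodeBlock logic."""
    lines = source.split("\n")
    start_marker = tag + " snippet_start"
    end_marker = tag + " snippet_end"
    start = end = None
    for i, line in enumerate(lines):
        # skip the '_remove' variants of the markers when locating the region
        if start is None and start_marker in line and "snippet_start_remove" not in line:
            start = i + 1
        elif start is not None and end_marker in line and "snippet_end_remove" not in line:
            end = i
            break
    # no markers found: fall back to the whole file
    body = lines[start:end] if start is not None else lines
    # un-indent: strip the common leading whitespace of non-empty lines
    indents = [len(s) - len(s.lstrip()) for s in body if s.strip()]
    pad = min(indents, default=0)
    return "\n".join(s[pad:] if s.strip() else "" for s in body)
```

Because the markers live in ordinary comments, the example files stay runnable as-is while the docs pages show only the tagged region.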
 
diff --git a/docs/site/src/theme/CodeBlock.tsx b/docs/site/src/theme/CodeBlock.tsx
index f6046c51d9f1ae51e3e6e5bd0c9fee7548769de5..4742c44e4a577aa282fed89a2da933b0ef185cac 100644
--- a/docs/site/src/theme/CodeBlock.tsx
+++ b/docs/site/src/theme/CodeBlock.tsx
@@ -22,29 +22,36 @@ function ReferenceCode(props: any) {
         );
     }
     codeBlockContent = codeBlockContent.replace(/"/g,'')
-    
+
     const urlLink = "https://stash.desy.de/projects/ASAPO/repos/asapo/browse/docs/site/" + codeBlockContent
 
     let snippetTag = props.snippetTag
     if (snippetTag !== undefined) {
         snippetTag = snippetTag.replace(/"/g,'')
     }
-    
+
     if (codeBlockContent) {
         const res = requireContext(codeBlockContent)
         let body = res.default.split('\n')
-        const fromLine = body.indexOf(snippetTag + " snippet_start") + 1;
-        const toLine = body.indexOf(snippetTag + " snippet_end", fromLine) - 1;
+        const fromLine = body.findIndex(s => s.includes(snippetTag + " snippet_start") &&
+                                            !s.includes(snippetTag + " snippet_start_remove")) + 1;
+        // findIndex has no fromIndex argument (its second parameter is thisArg),
+        // so the search offset has to live inside the predicate
+        const toLine = body.findIndex((s, i) => i >= fromLine &&
+                                                s.includes(snippetTag + " snippet_end") &&
+                                               !s.includes(snippetTag + " snippet_end_remove")) - 1;
         if (fromLine > 0) {
             body = body.slice(fromLine, (toLine>-1?toLine:fromLine) + 1)
         }
-        const fromLineRemove = body.indexOf(snippetTag + " snippet_start_remove");
-        const toLineRemove = body.indexOf(snippetTag + " snippet_end_remove", fromLineRemove);
+        const fromLineRemove = body.findIndex(s => s.includes(snippetTag + " snippet_start_remove"));
+        const toLineRemove = body.findIndex((s, i) => i >= fromLineRemove &&
+                                                      s.includes(snippetTag + " snippet_end_remove"));
         if (fromLineRemove>-1) {
             body.splice(fromLineRemove, toLineRemove>-1?toLineRemove-fromLineRemove + 1:2)
         }
         body = body.filter(a => !a.includes("snippet_start_remove") && !a.includes("snippet_end_remove"))
-        body = body.join('\n')
+
+        // calculate the minimum number of spaces in non-empty lines
+        let leadingSpaces = Math.min(...(body.filter(s => s.trim().length > 0).map(s => s.search(/\S|$/))));
+
+        // remove the leading spaces (un-indent the code snippet if it was indented)
+        body = body.map(s => s.trim().length > 0 ? s.slice(leadingSpaces) : "").join('\n')
 
 
         const customProps = {