You should see the "successfuly sent" message in the logs, and the file should appear in the corresponding folder (for the ASAPO standalone docker container in ```/tmp/asapo/receiver/files/test_facility/gpfs/test/2019/data/asapo_test/processed/``` by default).
During a beamtime, data can be produced by different sources. For example, a detector is a data source; if multiple detectors are used during an experiment, they can act as different data sources or as a single data source (more details in the datasets section below). A user application that simulates or analyses data can also act as an ASAPO data source. Each data source has its own unique name within a beamtime.
#### Data Stream
Each data source can emit multiple data streams. Stream names must be unique within a given data source.
#### Message
Data streams consist of smaller entities: messages. The content of a message is deliberately flexible in order to cover a broad range of use cases. Typically it consists of metadata and some binary data (e.g. a detector image, or an HDF5 file with multiple images). At the moment ASAPO itself is agnostic to the data and treats it as an opaque binary array; later, specific formats may be handled natively (the most prominent use case being an HDF5 file with multiple images).
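The hierarchy described above (beamtime → data source → stream → message) can be sketched with plain data classes. This is a conceptual illustration only, not the ASAPO client API; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not the ASAPO API): models the data source ->
# stream -> message hierarchy described above.

@dataclass
class Message:
    id: int
    metadata: dict
    data: bytes  # ASAPO treats the payload as an opaque binary array

@dataclass
class Stream:
    name: str
    messages: list = field(default_factory=list)

@dataclass
class DataSource:
    name: str  # unique within a beamtime
    streams: dict = field(default_factory=dict)

    def stream(self, name: str) -> Stream:
        # Stream names are unique per data source, so reuse an
        # existing stream rather than creating a duplicate.
        return self.streams.setdefault(name, Stream(name))

source = DataSource("detector1")
s = source.stream("scan_001")
s.messages.append(Message(id=1, metadata={"exposure_ms": 100}, data=b"\x00\x01"))
assert source.stream("scan_001") is s  # same name -> same stream object
```

Requesting the same stream name twice returns the same stream object, mirroring the uniqueness constraint on stream names within a data source.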