Hardware Monitoring - Data Integration

The flexible plug-in architecture implemented in Orchestra, where plug-ins are written as transformation jobs (as described in “Stat PDI Jobs/Transformations - Example Configuration”), makes it possible to react to events from the Hardware Monitoring functionality. For more information, see “HW Monitoring”.
The default installation does not come with any plug-ins activated, but activating your own plug-ins is straightforward. The process is described below:
1. Create and test your transformation in Pentaho PDI.
Pentaho PDI contains a visual editor for developing jobs. You can also use or customize the official job examples found on Qmatic’s official GitHub account.
2. Deploy the resulting .ktr file into the Orchestra deployment.
The .ktr file is the definition of the transformation/job.
The standard location for jobs is <installation_directory>/conf/stat-jobs/qmatic, but you can also use the <installation_directory>/conf/stat-jobs/custom folder.
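The deployment itself is just a file copy. Below is a minimal shell sketch with hypothetical names: the installation directory and the .ktr filename are assumptions, and the `touch` merely stands in for the file you exported from PDI.

```shell
# Hypothetical locations; point ORCHESTRA_HOME at your real installation.
ORCHESTRA_HOME="${ORCHESTRA_HOME:-$PWD/orchestra}"
JOB_DIR="$ORCHESTRA_HOME/conf/stat-jobs/custom"   # keeps your own jobs separate from Qmatic's

touch alert-on-disconnect-rest.ktr   # stands in for the file exported from PDI
mkdir -p "$JOB_DIR"                  # the custom folder may not exist yet
cp alert-on-disconnect-rest.ktr "$JOB_DIR/"
```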
3. Define which events the job should trigger on.
This is done by adding a few lines at the end of the file <installation_directory>/system/conf/stat-jobs.xml.
Below is an example that triggers the job on all printer-related events (replace the filename with the location of your .ktr file):
 
```
<job filename="qmatic/alert-on-disconnect-rest.ktr">
        <event name="DISCONNECT"/>
        <event name="PAPER_JAM"/>
        <event name="OUT_OF_PAPER"/>
</job>
```
 
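If you deployed the file to the custom folder instead, the filename attribute appears to be resolved relative to the stat-jobs directory, so a hypothetical entry (my-custom-alert.ktr is an invented name) could look like this:

```
<job filename="custom/my-custom-alert.ktr">
        <event name="DISCONNECT"/>
</job>
```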
4. Restart Orchestra central.
After the job has been registered, you can change the contents of the kettle job (.ktr) file at runtime. Changes to stat-jobs.xml, however, always require a restart to be registered.