
About this module

This module shows you how to manage data collisions caused by multiple data sources, such as a .csv file and a .xlsx file stored on Dropbox, synchronising data to a SQL Server database.
A data collision occurs when two data sources contain the same fields with different values and are imported into the same database table. On import, the data imported last overwrites the previous values in the database. As the last data imported is not always the most up-to-date information, you need to identify the data source that is your single point of truth, to ensure that the data in the database table is accurate.
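To make the collision concrete, here is a minimal Python sketch of the last-write-wins behaviour described above (the code and record values are illustrative only; Universal Platform does not involve writing Python):

    # Minimal sketch of a last-write-wins data collision (illustrative values).
    # Both sources describe the same employee, but with different last names.
    database = {}

    def import_record(record):
        # Whatever is imported last simply replaces the existing row.
        database[record["EmployeeNo"]] = record

    import_record({"EmployeeNo": 1, "LName": "Smith"})      # newer, correct value
    import_record({"EmployeeNo": 1, "LName": "Osullivan"})  # older value, imported last

    print(database[1]["LName"])  # prints "Osullivan": the stale value won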
So what does this module cover? Well, the scenario is this:

In Module 1, Harmony, a member of the HR team, was required to capture all company staff records in a SQL database and ensure that all changes to staff information were subsequently updated in the database. 
 
 Harmony does not have any SQL database experience but does know how to work with a .csv file. So, Harmony was happy to add staff information to a .csv file but asked Bob from the IT Department to assist her with importing the data to the database. As Harmony works from a number of different global locations, the solution was to enable Harmony to add staff records to the HR.csv file on Dropbox, and Bob would then use Universal Platform to import the data to the database. 
 
 At the end of Module 1, Harmony and Bob had successfully imported the data in HR.csv to the SQL database, using Universal Platform. 
 
 In Module 2, Harmony and Bob were required to manage any changes to staff records in HR.csv and ensure only the updated information is imported to the database. At the end of Module 2, Harmony and Bob had successfully imported only changes to the data to the database, using Universal Platform's constraints. 
 
 In Module 3, Harmony and Bob configured scheduled jobs to ensure that changes to the data in HR.csv are automatically imported to the database at set intervals. 
 
 As Harmony is a member of the HR team, she is able to manage certain information in HR.csv, such as staff member names and whether they are still employed at the company. However, Harmony is not always informed if contact information changes, such as if a staff member changes their mobile telephone number. 
 
 Devan from the Facilities Department is responsible for ensuring that all staff personal and company contact details are updated in the company global address book. This includes both mobile and desk telephone contact numbers. As Devan is not a member of the HR team, he is not involved with staff personal information, such as whether a staff member gets married and changes their last name, or whether they are no longer employed at the company. To manage staff contact information, Devan creates a Microsoft Excel spreadsheet named Phone_list.xlsx.
 
 As the company requires all staff records to be captured in a SQL database, Devan asked Bob from the IT Department to assist him with importing the data to the database. Since Bob has configured a solution for importing HR.csv to the database for Harmony, he saves Phone_list.xlsx to Dropbox so that he can use the same Universal Platform functionality to import the data from the spreadsheet to the database and ensure that the .xlsx file and database is synchronised with any information updates. 
 
 Since there are now two files containing staff information that are required to be imported to the database, Harmony, Bob and Devan now need to find a solution so that newer data in the database is not overwritten by older data. This applies specifically to the fact that the staff contact information Devan updates in Phone_list.xlsx is always more up-to-date than the contact information Harmony has in HR.csv. Bob recommends using Universal Platform's Single Point of Truth (SPOT) feature to facilitate this. 
 
 Your task is to help Bob by following Module 4 to ensure that changes to the data in Phone_list.xlsx are automatically imported to the database, and that changes to the data in HR.csv, which is automatically imported to the database according to a set schedule, do not overwrite this information.


Module 4 of this tutorial explains how to get data from a spreadsheet and a CSV file in Dropbox into a SQL table, and how to set up SPOTs (Single Points of Truth) to specify the data source that has precedence over the other data sources.

  • You will start by creating the data model of the .xlsx file and mapping the .xlsx columns to the SQL table columns.
  • You will then configure a pipeline to move the data from the .xlsx file into the database table.
  • You will then use the pipeline to fetch the file and then read the file. Reading the file confirms that you have successfully connected to it.
  • You will query the database table to ensure the data is in the table.
  • Once your .xlsx data source is configured, you will create the SPOTs for the different data sources.
  • You will then assign the SPOTs to the relevant data models.
  • Next you will configure which data from the .csv file and the .xlsx file has precedence in your database table, to ensure data is not continually overwritten by each data source.
  • You will complete the module by querying the database table to see how it changes when you run alternate pipelines.

 

 Creating data models

In Module 1 of this training program you created two data models:

  • DataModel_CSV – this data model specifies the fields contained in the HR.csv file.
  • DataModel_DataHub – this data model specifies the fields contained in the DataHub database table.

In this module, you will also import data from a Microsoft Excel spreadsheet (.xlsx) file to the DataHub database table. Since you have not yet worked with .xlsx files in this tutorial, you must begin by configuring a data model for the spreadsheet.

 Creating a .xlsx file data model

To create a data model:

  1. Open Data Manager in a browser tab.
  2. Select Data Models. The Data Models screen is displayed.
  3. Click Add.
  4. On the General tab, enter a name for the data model in the format: DataModel_XLSX.
  5. Click Save.



  6. Select the Model tab.
  7. Click Add Entity to add an entity to the editor.
  8. Select the New Entity data model block on the editor to view the entity details.
  9. Enter the following: 
    • Name: Phone_List
    • Title: Phone_List
    • Type: XLSX
    • Owner: Your Name

  10. Click Save. Your entity is renamed to Phone_List on the editor.
  11. Select the Phone_List entity.
  12. Select the Attributes tab and add the following:
    • Employee_Number
    • First_Name
    • Last_Name
    • Mobile_Number
    • Desk_Phone 
      Note: Attribute names cannot contain spaces or special characters, such as @ and '. If you copy-and-paste the attributes, please ensure that any spaces are removed.

  13. Configure the attributes' Type as follows:
    • Employee_Number: Int
    • First_Name: String
    • Last_Name: String
    • Mobile_Number: String
    • Desk_Phone: String

  14. Click Save & Close.
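If it helps to see the entity as code, here is a rough Python equivalent of the attributes and types you just configured (a sketch only; Universal Platform stores this as a data model, not as code):

    from dataclasses import dataclass

    # Rough Python equivalent of the Phone_List entity defined above.
    @dataclass
    class PhoneList:
        Employee_Number: int  # Type: Int
        First_Name: str       # Type: String
        Last_Name: str        # Type: String
        Mobile_Number: str    # phone numbers as strings, so leading zeros survive
        Desk_Phone: str       # Type: String

    # Illustrative values only.
    harmony = PhoneList(1, "Harmony", "Smith", "0831234567", "0215550100")
    print(harmony.Last_Name)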

 Creating a data format

As with the .csv file in Module 1 of this tutorial, when you import data from a .xlsx file to a database, you use a data format to indicate the relationship of data between the file and the database table.

A data format tells Universal Platform which columns in the file match which columns in the database table.
For example, Phone_list.xlsx contains the data as shown below. Each column in the spreadsheet must have a corresponding field in the database table. The column headers defined in the .xlsx file are:

  • Employee_Number
  • First_Name
  • Last_Name
  • Mobile_Number
  • Desk_Phone
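Conceptually, the data format boils down to a column-to-column mapping. Here is a hedged Python sketch of that idea, using the mapping you will configure in the steps below (the sample row values are illustrative):

    # The data format is essentially a source-to-target column mapping.
    # Keys are Phone_list.xlsx headers; values are DataHub table columns.
    XLSX_TO_DATAHUB = {
        "Employee_Number": "EmployeeNo",
        "First_Name": "FName",
        "Last_Name": "LName",
        "Mobile_Number": "Mobile_Number",
        "Desk_Phone": "Desk_Phone",
    }

    row = {"Employee_Number": 1, "First_Name": "Harmony"}  # illustrative row
    mapped = {XLSX_TO_DATAHUB[col]: value for col, value in row.items()}
    print(mapped)  # {'EmployeeNo': 1, 'FName': 'Harmony'}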



To configure a data format for importing data from a .xlsx file to a SQL Server database:

  1. Open Interface Manager in a browser tab.
  2. Select Data Formats. The Data Formats screen is displayed.
  3. Click Add.
  4. On the General tab, enter a name for the data format in the format: DataFormat_XLSX.
  5. Select the DataModel_XLSX data model you created when you completed the Creating a .xlsx file data model task, from the Source Data Model list in the Data Format section.
  6. Select the DataModel_DataHub data model you created when you completed Module 1 of this tutorial, from the Target Data Model list in the Data Format section.
  7. Click Save.



  8. Select the Format tab.
  9. Drag-and-drop the Phone_List source data entity on to the data format editor. This is the entity you created in DataModel_XLSX.
  10. Drag-and-drop the DataHub target data entity on to the data format editor. This is the entity you created in DataModel_DataHub in Module 1 of this tutorial.
  11. Map the .xlsx fields to the database table columns as follows:
    • Desk_Phone >> Desk_Phone
    • Employee_Number >> EmployeeNo
    • First_Name >> FName
    • Last_Name >> LName
    • Mobile_Number >> Mobile_Number
  12. Click Save & Close.

Now that you have created the data format, your next step is to configure the pipeline that will import data from the .xlsx file into the database.

 Configuring the pipeline

In Module 1 of this tutorial you configured a pipeline to get, read and save data from the HR.csv file on Dropbox to your SQL database table.
Now, to populate your database table with the data in Phone_list.xlsx, you must also configure a pipeline for this file that uses the Dropbox connector you created in Module 1.
To create a pipeline:

  1. Go to the browser tab that has Platform Manager open.
  2. Select Pipelines. The Pipelines screen is displayed.
  3. Click Add.
  4. On the General tab, enter a name for the pipeline in the format: Pipeline_XLSX.
  5. Click Save.

    To begin programming, you must select the Dropbox connector you created:


  6. Select the Editor tab. The pipeline editor may take some time to load, so don't run away!
  7. Enter DropboxConnector_TrainingAccount, the name of the Dropbox connector you created when you completed Module 1 of this tutorial, in the filter… field.

    When you search for connectors, the results are displayed in two groups: Universal and Connectors. The Universal group displays the system default connectors and the Connectors group displays custom connectors. Since you are looking for the connector you created, you must use the Connectors group.

  8. Click Connectors in the list. A list of functions you can perform with the connector is displayed. To use a function, drag-and-drop it on to the pipeline editor.



    You are now going to configure a Get File function for your pipeline.

  9. Drag-and-drop the Get File function on to the pipeline editor.
  10. Select the Get File function block on the editor to view the function Details and Properties. If the Properties pane is not displayed, click Open Properties.
  11. Enter /Phone_list.xlsx in the Get File Properties Path field. This is the relative path to where the .xlsx file is stored on Dropbox.
  12. Click Save.


  13. Click Run to test that your connection to the Excel file is working. Your output should complete with no errors. If you receive errors, please check that the path you entered is correct.
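For orientation, the Get File function does the kind of work you would otherwise script with the official Dropbox SDK. A minimal sketch, assuming the dropbox Python package and a placeholder access token (the connector manages authentication for you):

    import dropbox

    # Sketch of a "get file" step using the official Dropbox SDK.
    # "YOUR_ACCESS_TOKEN" is a placeholder; Universal Platform's connector
    # handles credentials, so you never do this yourself in the tutorial.
    dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")

    # /Phone_list.xlsx is the same relative path entered in the Path field above.
    metadata, response = dbx.files_download("/Phone_list.xlsx")
    with open("Phone_list.xlsx", "wb") as f:
        f.write(response.content)
    print(f"Fetched {metadata.name}, {metadata.size} bytes")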



You have now successfully created a pipeline to get the Phone_list.xlsx file from Dropbox! The next step is to read the data in the file.

 Importing data from the .xlsx file to the SQL database through the pipeline

Now that you have successfully created a pipeline to the Phone_list.xlsx file on Dropbox, you can read and import the data in the file.
When you read a spreadsheet, you must specify the format of the spreadsheet, such as the name of the worksheet and the range of cells that contain the data to read, and whether the spreadsheet has a header row. 
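For orientation, here is a rough pandas equivalent of that read step, using the sheet name, range and header settings you will enter in a moment (pandas is not part of Universal Platform):

    import pandas as pd

    # Rough equivalent of the Read Spreadsheet step configured below:
    # sheet "Phone", range A1:X15, with the first row used as the header.
    df = pd.read_excel(
        "Phone_list.xlsx",
        sheet_name="Phone",
        header=0,       # Has Header Row?: True
        usecols="A:X",  # columns from the A1:X15 range
        nrows=14,       # 15 rows in the range, minus the header row
    )
    print(df.head())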

 
To read the file:

  1. Ensure you have your pipeline open to the Editor tab.
  2. Enter Read in the filter.
  3. Click Connectors in the list.
  4. Drag-and-drop the Read Spreadsheet document connector on to the pipeline editor.
  5. Select the Read Spreadsheet document block on the editor to view the function Details and Properties.
  6. Enter the following:
    • Sheet Name: Phone
    • Has Header Row?: True
    • Range: A1:X15
    • Start Row: 1



    You now need to join the functions using Bind:

  7. Join the Get File function block to the Read Spreadsheet function block as shown in the image below. The line between function blocks shows the order they are executed in.
  8. Enter Bind in the filter.
  9. Click Universal in the list.
  10. Drag-and-drop the Bind workflow on to the pipeline editor.
  11. Join the functions as follows:



  12. Select the Bind workflow block on the editor to view the function Details and Properties.
  13. Select Files in the Bind Properties Get File and Read Spreadsheet lists.
  14. Click Save.



  15. Click Run to execute your program. The results are displayed in the Output pane for you to review.
  16. Click the Context tab on the Output pane.
  17. Expand the Read Spreadsheet list and select Data. The data output should look as follows:

    Note: This is a JSON-style representation of the data for display purposes; the data itself is not in JSON format.



    Right, now that the pipeline is configured to read a .xlsx file, let's map the data in the file to the database table according to the relationship defined in the DataFormat_XLSX data format.

  18. Enter Map Data in the filter… field.
  19. Drag-and-drop the Map Data interface on to the pipeline editor.
  20. Enter Bind in the filter.
  21. Drag-and-drop two instances of the Bind workflow on to the pipeline editor.
  22. Enter Save in the filter.
  23. Click Connectors in the list.
  24. Drag-and-drop the Save database function on to the pipeline editor. You must ensure you select the save function for the database connection you created in Module 1 of this tutorial, that is, for Database_UPDemo.
  25. Join the functions as follows:




    Looking at the joined functions above, note that you created the first Bind function when you completed the Configuring the pipeline task. In this task you work with the second and third Bind functions only. 

    So, starting with the second Bind function:

  26. Select the second Bind workflow block on the editor to view the function Details and Properties.
  27. Select Data in the Bind Properties Read Spreadsheet and Map Data lists.



  28. Select the Map Data interface block on the editor to view the function Details and Properties.
  29. Select the DataFormat_XLSX data format you created, from the Data Format list in the Properties section.
  30. Select True from the Ignore Field Casing? list. If enabled, this setting tells the system to ignore case-sensitive content checks when importing data.



  31. Select the third Bind workflow block on the editor to view the function Details and Properties.
  32. Select Data in the Bind Properties Map Data and Save lists.
  33. Click Save.




    Next you must configure the save function so that it imports the read data to the database, according to the attributes defined in the DataModel_DataHub data model.

  34. Select the Save database function block on the editor to view the function Details and Properties.
  35. Select the DataModel_DataHub data model you created in Module 1 of this tutorial, from the Data Model list in the Properties section.
  36. Click Save.




  37. Click Run to execute your program. The results are displayed in the Output pane for you to review and should complete with no errors. If you receive errors, please check your pipeline configuration.
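If you want a mental model of what Map Data and Save do together, here is a hedged pandas/SQLAlchemy sketch (the connection string is a placeholder, and the platform's Save function also updates existing records, which a plain append does not):

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection string for your own SQL Server instance.
    engine = create_engine(
        "mssql+pyodbc://user:password@server/UPDemo"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    df = pd.read_excel("Phone_list.xlsx", sheet_name="Phone", header=0)

    # Map Data: rename source columns to DataHub columns per DataFormat_XLSX.
    df = df.rename(columns={
        "Employee_Number": "EmployeeNo",
        "First_Name": "FName",
        "Last_Name": "LName",
    })

    # Save: write the mapped rows to the DataHub table. Universal Platform
    # performs updates as well; append here is a simplification.
    df.to_sql("DataHub", engine, if_exists="append", index=False)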


Next you must check that the data in the .xlsx file was imported to your database table.

 Viewing the data imported to the database table

You view the data imported to your database table by going to Queries and running the table query that you created in Module 1 of this tutorial. 

To do this:

  1. Go to the browser tab that has Data Manager open.
  2. Select Queries. The Queries screen is displayed.
  3. Select the CheckTable_DataHub query you created in Module 1 of this tutorial, from the list.
  4. Select the Editor tab and ensure the following query is displayed: Select * from DataHub
  5. Click Run to execute your query and view the records imported from Phone_list.xlsx in your database table. You will see that the Desk_Phone field now contains data from the .xlsx file. You will also notice that Harmony's last name is changed from Osullivan to Smith and that the mobile numbers are prefixed with "0". This is because the data in Phone_list.xlsx is different to HR.csv.



Now let's see what happens if you import data from HR.csv. To do this, you must run the Pipeline_CSV you created in Module 1:

  1. Go to the browser tab that has Platform Manager open.
  2. Select Pipelines. The Pipelines screen is displayed.
  3. Click the Pipeline_CSV pipeline. The Pipeline Details screen is displayed.
  4. Select the Editor tab.
  5. Click Run to execute your program.

Now let's check the data in your database table again:

  1. Go to the browser tab that has Data Manager open and ensure the following query is displayed: Select * from DataHub
  2. Click Run to execute your query and view the records imported from HR.csv in your database table. You will see that the Desk_Phone field still contains data from the .xlsx file but that Harmony's last name has changed back to Osullivan and the mobile numbers are no longer prefixed with "0". This is because existing data in the database table is overwritten with the data you imported from HR.csv. The desk phone numbers do not change because this information does not exist in HR.csv.


If you run the pipelines and query again, you will see that the data continues to be overwritten by the last data source imported. This is known as a data collision.
To avoid data collisions, you need to create Single Points of Truth (SPOTs). SPOTs specify the source file and data that has precedence over other data sources, based on specific fields in the data models.

 What you've done so far

 Let's have a quick look at what you have done so far:

 Creating SPOTs

Since there are now two files containing staff information that are required to be imported to the database, you must ensure that newer data in the database is not overwritten by older data. This applies specifically to the fact that the staff contact information Devan updates in Phone_list.xlsx is always more up-to-date than the contact information Harmony has in HR.csv.

As you have seen, importing data from both a .csv and .xlsx file to a database table causes data collisions where existing data is overwritten by data from the last imported file. To avoid this you must configure SPOTs.
A SPOT (Single Point of Truth) is a way of tagging a data model so that all data imported using the data model is given priority over data imported to the same database table, using another data model. So, when data is imported from multiple sources, SPOTs indicate the data source that has precedence over the other data sources.
SPOTs do not have any configuration properties and are simply tags in the Universal Platform system that are used to indicate data hierarchy.
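In code terms, a SPOT-based merge picks each field's value from the highest-priority source that supplies it, instead of letting the last import win. A minimal sketch of that idea (the values are illustrative):

    # Minimal sketch of field-level precedence, the idea behind SPOTs.
    # For each field, the first source in the sequence that has a value wins,
    # regardless of import order.
    def resolve(field, sources_by_spot, spot_sequence):
        for spot in spot_sequence:
            record = sources_by_spot.get(spot, {})
            if record.get(field) is not None:
                return record[field]
        return None

    sources = {
        "Spot_XLSX": {"Mobile_Number": "0831234567"},
        "Spot_CSV": {"Mobile_Number": "831234567", "LName": "Osullivan"},
    }

    # Phone_list.xlsx outranks HR.csv for contact details.
    print(resolve("Mobile_Number", sources, ["Spot_XLSX", "Spot_CSV"]))  # 0831234567
    # HR.csv outranks Phone_list.xlsx for last names.
    print(resolve("LName", sources, ["Spot_CSV", "Spot_XLSX"]))          # Osullivan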
You now have three data models defined in Universal Platform:

  • DataModel_CSV – this data model specifies the fields contained in the HR.csv file.
  • DataModel_DataHub – this data model specifies the fields contained in the DataHub database table.
  • DataModel_XLSX – this data model specifies the fields contained in the Phone_list.xlsx file.

As each of these models defines the fields and data types specific to their structure, you must configure a different SPOT for each model.
So, to successfully complete the tasks in this tutorial, you must configure the following SPOTs:

  • .csv – this SPOT enables you to tag data sourced from the HR.csv file.
  • .xlsx – this SPOT enables you to tag data sourced from the Phone_list.xlsx file.
  • Database table – this SPOT enables you to tag data sourced from the DataHub database table.
 Creating a .csv file SPOT

To create a SPOT:

  1. Go to the browser tab that has Data Manager open.
  2. Select Spots. The Spots screen is displayed.
  3. Click Add.
  4. On the General tab, enter a name for the SPOT in the format: Spot_CSV.
  5. Configure the SPOT as follows:
    • Division: Universal Platform, or as defined when Universal Platform was installed.
    • Zone: Universal Platform, or as defined when Universal Platform was installed.

  6. Click Save & Close.

 Creating a .xlsx file SPOT

Next you must add a SPOT for the Microsoft Excel spreadsheet. The procedure for adding this SPOT is the same as when you added Spot_CSV.

So:

  1. On the Spots screen, click Add.
  2. On the General tab, enter a name for the SPOT in the format: Spot_XLSX.
  3. Configure the SPOT as follows:
    • Division: Universal Platform, or as defined when Universal Platform was installed.
    • Zone: Universal Platform, or as defined when Universal Platform was installed.
  4. Click Save & Close.
 Creating a database table SPOT

The last SPOT you must add is for the database table:

  1. On the Spots screen, click Add.
  2. On the General tab, enter a name for the SPOT in the format: Spot_DataHub.
  3. Configure the SPOT as follows:
    • Division: Universal Platform, or as defined when Universal Platform was installed.
    • Zone: Universal Platform, or as defined when Universal Platform was installed.

  4. Click Save & Close.
 Assigning SPOTs to data models

Now that you have created SPOTs, you must assign the SPOTs to your data models to tag them so that later you can specify the fields in your data models that have precedence over others.


 Assigning a SPOT to a .csv file data model

To assign a SPOT to a data model:

  1. Go to the browser tab that has Data Manager open.
  2. Select Data Models. The Data Models screen is displayed.
  3. Select the DataModel_CSV data model you created in Module 1 of this tutorial, from the list. The Data Model Details screen is displayed.
  4. Configure the data model as follows:
    • Spot: Spot_CSV

  5. Click Save & Close.

 Assigning a SPOT to a .xlsx file data model

 Next you must assign a SPOT to the Microsoft Excel spreadsheet data model. The procedure for assigning this is the same as when you configured the SPOT for DataModel_CSV.

So:

  1. On the Data Models screen, select the DataModel_XLSX data model you created when you completed the Creating a .xlsx file data model task, from the list.
  2. Configure the data model as follows:
    • Spot: Spot_XLSX

  3. Click Save & Close.
 Assigning a SPOT to a database table data model

 The last SPOT you must assign is for the database table:

  1. On the Data Models screen, select the DataModel_DataHub data model you created in Module 1 of this tutorial, from the list.
  2. Configure the data model as follows:
    • Spot: Spot_DataHub

  3. Click Save & Close.


Your SPOTs are now assigned to your data models, but there is one more assignment to make before you can see them in action.

 Assigning SPOTs to data formats

Now that you have created SPOTs and assigned them to data models, you must assign the SPOTs to your data formats. This lets you tag the data formats so that later you can specify the fields in your data models that have precedence over others.

 

 Assigning a SPOT to a .csv file data format

To assign a SPOT to a .csv data format:

  1. Go to the browser tab that has Interface Manager open.
  2. Select Data Formats. The Data Formats screen is displayed.
  3. Select the DataFormat_CSV data format you created in Module 1 of this tutorial, from the list. The Data Format Details screen is displayed.
  4. Configure the data format as follows:
    • Source System: Spot_CSV

  5. Click Save & Close.

 Assigning a SPOT to a .xlsx file data format

To assign a SPOT to your data format for the Microsoft Excel spreadsheet:

  1. Go to the browser tab that has Interface Manager open.
  2. Select Data Formats. The Data Formats screen is displayed.
  3. Select the DataFormat_XLSX data format you created previously, from the list.
  4. Configure the data format as follows:
    • Source System: Spot_XLSX

  5. Click Save & Close.

 Configuring the points of truth for the SQL database

Since the staff contact information Devan updates in Phone_list.xlsx is always more up-to-date than the contact information Harmony has in HR.csv, you must configure the database data model to specify the fields in the source files that you want to use as the single points of truth, and set the source file hierarchy.

Devan is responsible for ensuring that all staff personal and company contact details are updated in the company global address book. This includes both mobile and desk telephone contact numbers. As this information takes precedence over the contact details Harmony has in the Phone_Number field, you must configure the Mobile_Number field from Phone_list.xlsx as a point of truth.
Harmony is responsible for ensuring that staff contractual information is always accurate, such as if a staff member gets married and changes their last name. As this information takes precedence over the staff details Devan has, you must configure the Last_Name field from HR.csv as a point of truth.
When you specify points of truth, you configure the following information:

  • SPOT field – this is the field in the database table that corresponds to the field in the data source that you want to make the point of truth.
  • Source attributes – this is the field from the data source that you want to make the point of truth.
  • Source sequence – this is the hierarchy of the data sources that indicates the source that takes precedence. The order that these are listed indicates the priority in descending order.

Since more than one record in the database may have the same value, such as two people having the same mobile number, the database table in this tutorial contains a corresponding "guid" field. For example, the value of Mobile_Number_guid is unique for a particular record, but the value of Mobile_Number may not be. To ensure the correct records are imported, you must configure the SPOT field to be the unique Mobile_Number_guid field.
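To see why the guid matters, consider this hedged sketch, where incoming data is matched on the unique Mobile_Number_guid rather than on the (possibly duplicated) Mobile_Number value; the guid and phone values are made up:

    # Two staff members can share one mobile number, so Mobile_Number alone
    # cannot identify a record. The unique guid can. (Values are made up.)
    records = {
        "guid-001": {"EmployeeNo": 1, "Mobile_Number": "0831234567"},
        "guid-002": {"EmployeeNo": 2, "Mobile_Number": "0831234567"},  # same number
    }

    incoming = {"Mobile_Number_guid": "guid-002", "Mobile_Number": "0829999999"}

    # Matching on the guid updates exactly the intended record.
    records[incoming["Mobile_Number_guid"]]["Mobile_Number"] = incoming["Mobile_Number"]
    print(records["guid-001"]["Mobile_Number"])  # 0831234567, untouched
    print(records["guid-002"]["Mobile_Number"])  # 0829999999, updated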


To configure the points of truth from Phone_list.xlsx:

  1. Go to the browser tab that has Data Manager open.
  2. Select Data Models. The Data Models screen is displayed.
  3. Select the DataModel_DataHub model from the list.
  4. Select the Model tab.
  5. Select the DataHub entity.
  6. Select the Sources tab and add the following:
    • Mobile Numbers

      Note: Source names can contain spaces but cannot contain special characters, such as @ and '.

  7. Configure the source as follows:
    • SPOT Field: Mobile_Number_guid
    • Source Attributes: Mobile_Number
    • Source Sequence:
      1. Spot_XLSX
      2. Spot_CSV

    Tip: The sequence is defined in the order that you add each item. Once added, you can drag-and-drop the items in the list to re-order them.

  8. Click Save.



And now for the moment of truth! Let's see what happens when you import data from Phone_list.xlsx.

To do this, you must run the Pipeline_XLSX:

  1. Go to the browser tab that has Platform Manager open.
  2. Select Pipelines. The Pipelines screen is displayed.
  3. Click the Pipeline_XLSX pipeline. The Pipeline Details screen is displayed.
  4. Select the Editor tab.
  5. Click Run to execute your program.


Now let's check the data in your database table:

  1. Go to the browser tab that has Data Manager open.
  2. Select Queries. The Queries screen is displayed.
  3. Select the CheckTable_DataHub query you created in Module 1 of this tutorial, from the list.
  4. Select the Editor tab and ensure the following query is displayed: 
    Select * from DataHub

  5. Click Run to execute your query and view the records imported from Phone_list.xlsx in your database table. You will see that the Desk_Phone field still contains data from the .xlsx file. You will also notice that Harmony's last name is Smith and that the mobile numbers are prefixed with "0".


Now let's import data from HR.csv:

  1. Go to the browser tab that has Platform Manager open.
  2. Select Pipelines. The Pipelines screen is displayed.
  3. Click the Pipeline_CSV pipeline. The Pipeline Details screen is displayed.
  4. Select the Editor tab.
  5. Click Run to execute your program.


Okay, let's check the data in your database table again:

  1. Go to the browser tab that has Data Manager open.
  2. Select Queries. The Queries screen is displayed.
  3. Select the CheckTable_DataHub query you created in Module 1 of this tutorial, from the list.
  4. Select the Editor tab and ensure the following query is displayed: Select * from DataHub
  5. Click Run to execute your query and view the records imported from HR.csv in your database table. You will see that the Desk_Phone field now contains no data. You will also notice that Harmony's last name is changed from Smith to Osullivan but that mobile numbers do not change.


If you run the pipelines and query again, you will see that the mobile phone numbers now always reflect Phone_list.xlsx and are no longer overwritten by the last data source imported. However, Harmony's last name still changes with each import. This is because we have not yet set the staff last names in HR.csv as a point of truth. 
 

So, let's now configure the points of truth from HR.csv:

  1. Go to the browser tab that has Data Manager open.
  2. Select Data Models. The Data Models screen is displayed.
  3. Select the DataModel_DataHub model from the list.
  4. Select the Model tab.
  5. Select the DataHub entity.
  6. Select the Sources tab and add the following:
    • Last Name

      Note: Source names can contain spaces but cannot contain special characters, such as @ and '.

  7. Configure the source as follows:
    • SPOT Field: LName_guid
    • Source Attributes: LName
    • Source Sequence:
      1. Spot_CSV
      2. Spot_XLSX

    Tip: The sequence is defined in the order that you add each item. Once added, you can drag-and-drop the items in the list to re-order them.

  8. Click Save.


So now run your pipelines and queries again and compare the data updated in the database table.
You will see that Harmony's last name, according to HR.csv, is no longer overwritten by the data in Phone_list.xlsx, and that staff contact information in Phone_list.xlsx takes precedence over the information in HR.csv.

 And that's a wrap

 Congratulations on completing this module! Before we finish, let's have a quick look at what you have done: 


This is the end of the tutorial – well done!

 


