Description
The Import from Spark Schema option automatically sets up a JSON, Avro, or Parquet Project in GenRocket from a Spark schema. It can be used to generate both simple and nested JSON, Avro, or Parquet files.
Note: The steps for this import are the same when choosing JSON, Avro, or Parquet from the drop-down menu.
Sample Spark Schema
```
root
 |-- factory: struct (nullable = true)
 |    |-- id: integer (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- type: string (nullable = true)
 |    |-- image: struct (nullable = true)
 |    |    |-- height: integer (nullable = true)
 |    |    |-- url: string (nullable = true)
 |    |    |-- width: integer (nullable = true)
 |    |-- thumbnail: struct (nullable = true)
 |    |    |-- height: integer (nullable = true)
 |    |    |-- url: string (nullable = true)
 |    |    |-- width: integer (nullable = true)
```
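The schema above describes a single nested `factory` structure with `image` and `thumbnail` sub-structures. A JSON record conforming to it might look like the following sketch (all field values and URLs here are hypothetical placeholders; GenRocket generates the actual test data):

```python
import json

# A hypothetical record matching the sample Spark schema above.
# Values are illustrative only.
record = {
    "factory": {
        "id": 101,
        "name": "Main Plant",
        "type": "assembly",
        "image": {
            "height": 480,
            "url": "https://example.com/image.png",
            "width": 640,
        },
        "thumbnail": {
            "height": 48,
            "url": "https://example.com/thumb.png",
            "width": 64,
        },
    }
}

print(json.dumps(record, indent=2))
```

Each `struct` in the schema corresponds to a nested JSON object, and `nullable = true` means any of these fields may also be absent or null in generated output.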
How to import from a Spark Schema
- Select a Project within the Project Dashboard.
- Expand the Domain menu and select Import from Spark Schema.
- Browse to and select a file to import by clicking on Choose File.
- Next, choose an output format type (JSON, AVRO, or PARQUET) from the drop-down menu. For this example, we will select AVRO.
- Click the Save button.
- Click OK to close the dialog window.
- The data model will be imported into the Test Data Project with Domains, Attributes, Receivers, Scenarios, and a Scenario Chain set up accordingly.
- This process may take a few minutes, and the user who initiated the process will receive an email when it has finished.
- Additionally, a Configuration File is created automatically; it will need to be downloaded along with the Scenario Chain to generate test data.
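For the AVRO output format chosen in the example above, the import has to translate Spark types into an Avro schema: each `struct` maps to an Avro `record`, `integer` maps to `int`, and `nullable = true` fields become unions with `null`. The sketch below illustrates that general mapping for the sample schema (the record names and the exact schema GenRocket emits are assumptions for illustration, not GenRocket's actual output):

```python
import json

def nullable(avro_type):
    """Wrap a type in a union with null, mirroring `nullable = true` in Spark."""
    return ["null", avro_type]

def record(name, fields):
    """Build an Avro record schema from (field_name, avro_type) pairs."""
    return {
        "type": "record",
        "name": name,
        "fields": [{"name": n, "type": nullable(t)} for n, t in fields],
    }

# Spark `integer` -> Avro `int`; Spark `string` -> Avro `string`.
image = record("Image", [("height", "int"), ("url", "string"), ("width", "int")])
thumbnail = record("Thumbnail", [("height", "int"), ("url", "string"), ("width", "int")])
factory = record("Factory", [
    ("id", "int"),
    ("name", "string"),
    ("type", "string"),
    ("image", image),
    ("thumbnail", thumbnail),
])

print(json.dumps(factory, indent=2))
```

This nesting of records inside records is how the simple and nested file structures mentioned in the Description are represented in Avro.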