MBIS623-20S1 Data Management

1. Assignment Overview

For the second MBIS623 assignment you are required to develop a set of data integration and data cleansing procedures for merging two datasets:

1. The first dataset is an extract from the Land Information New Zealand (LINZ) database listing all addresses in Christchurch. This dataset comes in a flat table format with a hefty redundancy overhead, among other issues.

2. The second dataset is a list of Stronger Christchurch Infrastructure Rebuild Team (SCIRT) repair jobs, including the descriptions of those jobs and the route/street assignments related to each job. This dataset also comes with a lot of data duplication, among other inefficiencies.

Your job is to write a series of SQL statements correcting the problems with the datasets, eventually transforming the data into one integrated database in which redundancy is eliminated and the route/street-to-repair-job assignment is handled efficiently, using associative tables and one-to-many relationships.

2. Resources and Process

The primary resource is what is called an SQL database dump, which can be imported to create a database schema called scirt_jobs_bound. Please download the compressed database dump provided alongside this assignment specification. The data in this database contains the disjoint datasets listed above.
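As a rough illustration of the kind of target model the brief describes, an associative table can link repair jobs to routes through two one-to-many relationships. The table and column names below are assumptions for the sake of the sketch, not names taken from the supplied dump:

```sql
-- Hypothetical target schema; names are illustrative only.
CREATE TABLE route (
    route_id   INT AUTO_INCREMENT PRIMARY KEY,
    route_name VARCHAR(100) NOT NULL UNIQUE   -- each street/route stored once
);

CREATE TABLE job (
    job_id      INT PRIMARY KEY,              -- SCIRT job identifier
    description TEXT
);

-- Associative table: one job relates to many routes, and one route
-- relates to many jobs, via two one-to-many relationships.
CREATE TABLE job_route (
    job_id   INT NOT NULL,
    route_id INT NOT NULL,
    PRIMARY KEY (job_id, route_id),           -- uniqueness of each assignment
    FOREIGN KEY (job_id)   REFERENCES job (job_id),
    FOREIGN KEY (route_id) REFERENCES route (route_id)
);
```

The composite primary key on the associative table guarantees that the same route cannot be assigned to the same job twice, which is one way the redundancy in the flat source data is eliminated.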
In addition to those datasets, there is a stored procedure and a function which split a string of comma-separated routes in the scirt_job table and store the results in a temporary table for further processing.

It is your job to create a suitable data model for the integrated dataset and to write the extract-transform-load (ETL) SQL statements that populate the new schema with the data from the two original datasets. You may use any online resources to learn more about MySQL and MySQL stored procedures and functions, with the following two recommended sources:

As you develop your data model and the ETL script, keep in mind the following requirements:

- Provide comments to explain each step in the script.
- Use appropriate naming conventions, e.g. consistent use of singular vs. plural names for new entities.
- Remember the requirements of a relation with regard to the uniqueness of objects in a relation.
- Eliminate data duplication through table decomposition.
- Consider the meaning and appropriateness of using NULL values vs. empty string values.

Start by launching MySQL Workbench, opening the local server connection, and then selecting the "Run SQL Script..." menu item from the "File" menu: this will create the schema and import the data, along with the stored procedures and functions.
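The decomposition and population steps might be sketched as below. The source and temporary table names (scirt_job, tmp_job_route) and their columns are assumptions based on the description above; the supplied split procedure is represented only by the temporary table it is said to produce:

```sql
-- Assumed: the supplied procedure has populated a temporary table
-- tmp_job_route(job_id, route_name) with one row per job/route pair.

-- 1. Load each distinct route name exactly once into the route table.
INSERT INTO route (route_name)
SELECT DISTINCT route_name
FROM tmp_job_route;

-- 2. Load jobs, collapsing the duplicated rows of the flat source table.
INSERT INTO job (job_id, description)
SELECT DISTINCT job_id, description
FROM scirt_job;

-- 3. Populate the associative table, resolving route names to the
--    surrogate keys generated in step 1.
INSERT INTO job_route (job_id, route_id)
SELECT DISTINCT t.job_id, r.route_id
FROM tmp_job_route AS t
JOIN route AS r ON r.route_name = t.route_name;
```

Using SELECT DISTINCT at each step is one simple way to satisfy the uniqueness requirement while decomposing the flat tables; the actual column names and any cleansing predicates will depend on what the imported dump contains.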
