Data Factory

  1. What’s the purpose of data pipelines?

    To extract, transform, and load (ETL) data from one or more sources into a destination data store.

  2. What’s the role of pipelines in ETL processes?

    To automate the integration of transactional data into an analytical data store, such as a lakehouse or data warehouse.

  3. What information is available in the pipeline run history?

    For each run we can see its start and end times, duration, how it was triggered (on demand or scheduled), and its current status (see the sketch below).
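    A minimal sketch of pulling the same run information programmatically, assuming the Fabric REST API's job-instances endpoint and a pre-acquired Azure AD token; the endpoint path and field names (startTimeUtc, invokeType, status) reflect the Fabric Job Scheduler API as I understand it, and the IDs below are hypothetical placeholders.

    # Sketch: list recent runs of a pipeline via the Fabric REST API.
    # Assumption: a valid Azure AD bearer token and the workspace/item
    # GUIDs of the pipeline are already known.
    import requests

    TOKEN = "<bearer-token>"           # hypothetical placeholder
    WORKSPACE_ID = "<workspace-guid>"  # hypothetical placeholder
    PIPELINE_ID = "<pipeline-guid>"    # hypothetical placeholder

    url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
           f"/items/{PIPELINE_ID}/jobs/instances")
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()

    for run in resp.json().get("value", []):
        # Each job instance carries the same facts the UI shows:
        # start/end times, how the run was invoked, and its status.
        print(run.get("startTimeUtc"), run.get("endTimeUtc"),
              run.get("invokeType"), run.get("status"))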

  4. How can pipelines be executed in Fabric?

    Pipelines can be run on demand from the user interface or scheduled to run automatically (see the sketch below).
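    A minimal sketch of the on-demand option from code rather than the UI, assuming the Fabric REST API's run-on-demand endpoint; the jobType value "Pipeline" should be verified against current documentation, and the IDs are hypothetical placeholders.

    # Sketch: trigger an on-demand pipeline run via the Fabric REST API.
    import requests

    TOKEN = "<bearer-token>"           # hypothetical placeholder
    WORKSPACE_ID = "<workspace-guid>"  # hypothetical placeholder
    PIPELINE_ID = "<pipeline-guid>"    # hypothetical placeholder

    url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
           f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline")
    resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()  # a 202 Accepted means the run was queued

    # The Location header points at the new job instance, which can be
    # polled for the run's status.
    print("Poll run status at:", resp.headers.get("Location"))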

  5. When should we use the Copy Data activity?

    When we need to copy data directly from a source to a destination without transformations, or when we want to import raw data and apply transformations in later pipeline activities (see the sketch below).
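    A minimal sketch of what a Copy Data activity looks like inside a pipeline's JSON definition, written here as a Python dict; the activity name and the source/sink type names are illustrative assumptions and vary by connector.

    # Sketch: the rough shape of a Copy Data activity definition.
    copy_activity = {
        "name": "Copy raw sales data",  # hypothetical activity name
        "type": "Copy",
        "typeProperties": {
            # Read from the source as-is: no transformation happens here.
            "source": {"type": "LakehouseTableSource"},  # illustrative type
            # Land the raw copy in the destination; later activities
            # (e.g. a notebook or dataflow) can apply transformations.
            "sink": {"type": "LakehouseTableSink"},      # illustrative type
        },
    }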

  6. Where can you view the run history for a pipeline?

    We can view it from the pipeline canvas or from the pipeline item listed on the workspace page.