Building Big Data Pipelines with PySpark + MongoDB + Bokeh

Build intelligent data pipelines with big data processing and machine learning technologies

Language: English

Rating: 4.7/5 (35 ratings) | 366 students

Instructor(s): EBISYS R&D


Description

Welcome to the Building Big Data Pipelines with PySpark & MongoDB & Bokeh course. In this course we will be building an intelligent data pipeline using big data technologies like Apache Spark and MongoDB.


We will be building an ETLP pipeline, where ETLP stands for Extract, Transform, Load and Predict. These are the stages our data has to pass through in order to become useful. Once the data has gone through this pipeline, we will be able to use it to build reports and dashboards for data analysis.


The data pipeline that we will build will comprise data processing using PySpark, predictive modelling using Spark's MLlib machine learning library, and data analysis using MongoDB and Bokeh.
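To give a feel for how these stages fit together, here is a minimal ETLP-style sketch. It assumes a local Spark session, an illustrative CSV file named trips.csv with pickup_lat, pickup_lon and fare columns, and the MongoDB Spark Connector (v10+) on the classpath; the file, column and database names are placeholders, not the course's actual dataset.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Spark session configured with a (hypothetical) MongoDB write URI
spark = (SparkSession.builder
         .appName("etlp-sketch")
         .config("spark.mongodb.write.connection.uri",
                 "mongodb://localhost:27017/demo.predictions")
         .getOrCreate())

# Extract: load raw data into a PySpark dataframe
raw = spark.read.csv("trips.csv", header=True, inferSchema=True)

# Transform: drop incomplete rows and normalise the fare column's type
clean = (raw.dropna(subset=["pickup_lat", "pickup_lon", "fare"])
            .withColumn("fare", F.col("fare").cast("double")))

# Predict: assemble features and fit a simple MLlib regression model
features = VectorAssembler(inputCols=["pickup_lat", "pickup_lon"],
                           outputCol="features").transform(clean)
model = LinearRegression(featuresCol="features", labelCol="fare").fit(features)
scored = model.transform(features)

# Load: persist the scored records to MongoDB for later analysis
# (the format name "mongodb" is the v10+ connector; older connectors use "mongo")
scored.drop("features").write.format("mongodb").mode("overwrite").save()
```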


  • You will learn how to create data processing pipelines using PySpark
  • You will learn machine learning with geospatial data using the Spark MLlib library
  • You will learn data analysis using PySpark, MongoDB and Bokeh, inside of Jupyter Notebook
  • You will learn how to manipulate, clean and transform data using PySpark dataframes
  • You will learn basic geo-mapping
  • You will learn how to create dashboards
  • You will also learn how to create a lightweight server to serve Bokeh dashboards (see the sketch after this list)
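As a taste of the dashboarding side, here is a minimal Bokeh dashboard sketch. It assumes a local MongoDB holding the demo.predictions collection written by the pipeline sketch above and pymongo installed; the collection, field and file names are illustrative. You would run it with `bokeh serve --show dashboard.py`.

```python
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource
from bokeh.io import curdoc
from pymongo import MongoClient

# Pull the scored records out of MongoDB (hypothetical demo.predictions collection)
client = MongoClient("mongodb://localhost:27017")
docs = list(client.demo.predictions.find({}, {"_id": 0}))

source = ColumnDataSource(data={
    "lon": [d["pickup_lon"] for d in docs],
    "lat": [d["pickup_lat"] for d in docs],
    "prediction": [d["prediction"] for d in docs],
})

# A simple scatter of pickup locations coloured by nothing fancy: just the points
plot = figure(title="Predicted fares by pickup location",
              x_axis_label="longitude", y_axis_label="latitude")
plot.scatter(x="lon", y="lat", source=source, size=6, alpha=0.5)

# Registering the plot with curdoc() is what lets `bokeh serve` host it
curdoc().add_root(plot)
curdoc().title = "ETLP dashboard"
```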

