Apache Spark and Scala Training
Apache Spark, an open-source cluster computing system, is growing fast, and with it the demand for data science training. Spark has a growing ecosystem of libraries and frameworks that enable advanced data analytics. Its rapid success is due to its power and ease of use: it is more productive and has a faster runtime than typical MapReduce-based Big Data analytics. Apache Spark provides in-memory, distributed computing and has APIs in Java, Scala, Python, and R. The Spark ecosystem is outlined below.
The entire ecosystem is built on top of the core engine. The core enables in-memory computation for speed, and its API supports Java, Scala, Python, and R. Spark Streaming enables processing streams of data in real time.
The reason people are so interested in Apache Spark is that it puts the power of Hadoop in the hands of developers. An Apache Spark cluster is easier to set up than a Hadoop cluster, it runs faster, and it is much easier to program. It puts the promise and power of Big Data and real-time analysis in the hands of the masses.
This course is intended for:
- Data Scientists
- Data Analysts
- Project Managers
MODULE 1 : INTRODUCTION TO SCALA
- Introducing Scala and deployment of Scala for Big Data applications and Apache Spark analytics
MODULE 2 : PATTERN MATCHING
- The importance of Scala
- The concept of the REPL (Read-Eval-Print Loop)
- Deep dive into Scala pattern matching, type inference, higher-order functions, currying, traits, application space, and Scala for data analysis.
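A minimal sketch of the features this module covers: pattern matching, type inference, a higher-order function, and currying. All names here (`describe`, `applyTwice`, `add`) are illustrative, not from the course materials.

```scala
// Pattern matching with type inference: the compiler infers the result type.
def describe(x: Any): String = x match {
  case 0         => "zero"
  case n: Int    => s"int: $n"
  case s: String => s"string of length ${s.length}"
  case _         => "something else"
}

// A higher-order function: takes another function as an argument.
def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

// Currying: a function that takes its arguments one parameter list at a time.
def add(a: Int)(b: Int): Int = a + b
val addFive: Int => Int = add(5)  // partially applied

println(describe(42))             // int: 42
println(applyTwice(_ + 3, 10))    // 16
println(addFive(2))               // 7
```

Each of these snippets can also be tried interactively in the REPL, one line at a time.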
MODULE 3 : EXECUTING THE SCALA CODE
- Learning about the Scala Interpreter
- Static object timer in Scala
- Testing String equality in Scala
- Implicit classes in Scala
- The concept of currying in Scala
- Various classes in Scala.
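Two of the topics above, string equality and implicit classes, can be sketched in a few lines (the class and method names are my own, chosen for illustration):

```scala
// In Scala, == on strings compares contents (it delegates to equals),
// unlike Java where == compares references.
val a = "spark"
val b = new String("spark")
println(a == b)   // true: same contents
println(a eq b)   // false: different references

// An implicit class adds methods to an existing type without modifying it.
implicit class RichWord(val s: String) {
  def shout: String = s.toUpperCase + "!"
}

println("hello".shout)  // HELLO!
```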
MODULE 4 : THE CLASSES CONCEPT IN SCALA
- Learning about the Classes concept
- Understanding the constructor overloading
- The various abstract classes
- The hierarchy types in Scala
- The concept of object equality
- The val and var keywords in Scala.
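A small sketch tying the module's topics together: a primary constructor, an auxiliary (overloaded) constructor, object equality, and the val/var distinction. The `Point` class is illustrative, not from the course.

```scala
class Point(val x: Int, var y: Int) {
  // Auxiliary constructor: overloads the primary one.
  def this(x: Int) = this(x, 0)

  // Object equality: override equals (and hashCode) for content comparison.
  override def equals(other: Any): Boolean = other match {
    case p: Point => p.x == x && p.y == y
    case _        => false
  }
  override def hashCode: Int = (x, y).##
}

val p = new Point(3)  // uses the auxiliary constructor: y defaults to 0
p.y = 7               // allowed: y is a var
// p.x = 1            // would not compile: x is a val

println(p == new Point(3, 7))  // true, thanks to the equals override
```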
MODULE 5 : CASE CLASSES AND PATTERN MATCHING
- Understanding sealed traits and the wildcard, constructor, tuple, variable, and constant patterns.
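A sketch of a sealed trait with case classes, exercising the pattern kinds this module lists (the `Shape` hierarchy is an illustrative example, not course code):

```scala
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

// Because Shape is sealed, the compiler can warn about non-exhaustive matches.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r   // constructor pattern
  case Rect(w, h) => w * h             // constructor pattern binding variables
}

def classify(x: Any): String = x match {
  case 0      => "the constant zero"   // constant pattern
  case (a, b) => s"a pair: $a, $b"     // tuple pattern
  case other  => s"something: $other"  // variable pattern (catches all, like the wildcard _)
}

println(area(Rect(3, 4)))   // 12.0
println(classify((1, 2)))   // a pair: 1, 2
```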
MODULE 6 : CONCEPTS OF TRAITS WITH AN EXAMPLE
- Understanding traits in Scala
- The advantages of traits
- Linearization of traits
- The Java equivalent
- Avoiding boilerplate code.
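Linearization, the trickiest item above, can be shown in a few lines. When several traits override the same method, Scala resolves `super` calls right-to-left through the linearized order (the trait names here are illustrative):

```scala
trait Greeter  { def greet: String = "hello" }
trait Loud   extends Greeter { override def greet: String = super.greet.toUpperCase }
trait Polite extends Greeter { override def greet: String = super.greet + ", please" }

// Linearization: Speaker -> Polite -> Loud -> Greeter.
// Polite's greet runs first, and its super call reaches Loud, not Greeter.
class Speaker extends Loud with Polite

println(new Speaker().greet)  // HELLO, please
```

This stacking of behavior through traits is also how Scala avoids the boilerplate that the equivalent Java decorator classes would need.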
MODULE 7 : SCALA-JAVA INTEROPERABILITY
- Implementation of traits in Scala and Java
- Handling the extension of multiple traits.
MODULE 8 : SCALA COLLECTIONS
- Introduction to Scala collections
- Classification of collections
- The difference between Iterator and Iterable in Scala
- Example of the List sequence in Scala.
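A short sketch of the Iterator/Iterable distinction and a List example:

```scala
val xs: Iterable[Int] = List(1, 2, 3)

// An Iterable can be traversed many times: it hands out a fresh Iterator each time.
println(xs.sum)  // 6
println(xs.sum)  // 6 again: traversing does not consume an Iterable

// An Iterator is a one-shot cursor: once consumed, it is exhausted.
val it: Iterator[Int] = xs.iterator
println(it.sum)      // 6
println(it.hasNext)  // false: the iterator is used up

// A typical operation on List, Scala's default immutable sequence:
val doubled = List(1, 2, 3).map(_ * 2)
println(doubled)     // List(2, 4, 6)
```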
MODULE 9 : MUTABLE COLLECTIONS VS. IMMUTABLE COLLECTIONS
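The contrast this module draws can be sketched in a few lines: immutable collections (the default) return a new collection from every operation, while mutable ones change in place.

```scala
import scala.collection.mutable

// Immutable: :+ returns a NEW list; the original is untouched.
val original = List(1, 2, 3)
val extended = original :+ 4
println(original)  // List(1, 2, 3): unchanged
println(extended)  // List(1, 2, 3, 4)

// Mutable: += modifies the buffer in place.
val buf = mutable.ListBuffer(1, 2, 3)
buf += 4
println(buf)       // ListBuffer(1, 2, 3, 4)
```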
MODULE 10 : USE CASE BOBSROCKETS PACKAGE
- Introduction to Scala packages and imports
- The selective imports
- The ScalaTest classes
- Introduction to the JUnit test class
- The JUnit interface via a JUnit 3 suite for ScalaTest
- The packaging of Scala applications in Directory Structure
- Example of Spark Split and Spark Scala.
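Selective and renaming imports, covered above, look like this (the packages are Scala's own; the value names are mine):

```scala
// Selective import: bring in only Map and Set from the mutable package,
// renaming Map to MutableMap so it cannot clash with the immutable default.
import scala.collection.mutable.{Map => MutableMap, Set => MutableSet}

val m = MutableMap("a" -> 1)
m("b") = 2          // in-place update: this is the mutable Map
println(m.size)     // 2

val s = MutableSet(1, 2, 2)
println(s.size)     // 2: sets deduplicate
```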
MODULE 11 : THE APACHE SPARK FRAMEWORK AND COMPARISON WITH HADOOP
- Detailed overview of Apache Spark and its various features
- Comparing Spark with Hadoop
- The various Spark components
- Combining HDFS with Spark
MODULE 12 : RDD IN SPARK USING SCALA
- The RDD operation in Spark
- The Spark transformations
- Actions, data loading
- Comparing with MapReduce
- Key Value Pair.
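The classic exercise for these topics is a key-value word count. The sketch below mirrors its shape with plain Scala collections so it runs without a cluster; on a real RDD the corresponding chain (shown in the comment) uses `flatMap`, `map`, and `reduceByKey`, and transformations stay lazy until an action forces evaluation.

```scala
// RDD equivalent: sc.textFile(path).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
val lines = Seq("spark and scala", "spark streaming")

val counts = lines
  .flatMap(_.split(" "))        // transformation: one word per element
  .map(word => (word, 1))       // transformation: key-value pairs
  .groupBy(_._1)                // local stand-in for Spark's reduceByKey
  .map { case (w, ps) => (w, ps.map(_._2).sum) }

println(counts("spark"))  // 2
```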
MODULE 13 : DATAFRAMES AND SPARK SQL USING SCALA
- The detailed Spark SQL
- The significance of SQL in Spark for working with structured data processing
- Spark SQL JSON support
- Working with XML data and Parquet files
- Creating HiveContext
- Writing DataFrames to Hive
- Reading of JDBC files
- The importance of DataFrames in Spark
- Creating DataFrames
- Manual schema inference
- Working with CSV files
- Reading of JDBC tables
- Writing a DataFrame out to a JDBC table
- The user-defined functions in Spark SQL
- Shared variable and accumulators
- How to query and transform data in DataFrames
- How DataFrames provide the benefits of both Spark RDD and Spark SQL
- Deploying Hive on Spark as the execution engine.
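A DataFrame is conceptually a table of rows with a schema. The plain-Scala analogue below mirrors a grouped Spark SQL query so it runs without a Spark installation; the equivalent Spark call is shown in the comment, and the `Employee` case class and data are illustrative.

```scala
case class Employee(name: String, dept: String, salary: Double)

val rows = Seq(
  Employee("ada",  "eng",   120.0),
  Employee("bob",  "eng",   100.0),
  Employee("cleo", "sales",  90.0)
)

// Spark SQL equivalent, after registering the data as a temp view "emp":
//   spark.sql("SELECT dept, AVG(salary) FROM emp GROUP BY dept")
val avgByDept: Map[String, Double] =
  rows.groupBy(_.dept).map { case (d, es) => (d, es.map(_.salary).sum / es.size) }

println(avgByDept("eng"))  // 110.0
```

In real Spark code the same case class is also how the schema gets inferred when building a DataFrame from Scala objects.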
MODULE 14 : MACHINE LEARNING WITH SPARK (MLlib) USING SCALA
- Different Algorithms
- The concept of an iterative algorithm in Spark
- Analyzing with Spark graph processing
- Introduction to K-Means and machine learning
- Various variables in Spark like shared variables, broadcast variables
- Learning about accumulators.
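K-Means is the module's canonical iterative algorithm: each pass assigns points to their nearest center, then moves each center to its cluster's mean. The toy one-dimensional version below shows that loop in plain Scala so it runs locally; in MLlib you would instead call `KMeans.train` on an RDD of vectors, and all names here are my own.

```scala
def kMeans1D(points: Seq[Double], init: Seq[Double], iters: Int): Seq[Double] =
  (1 to iters).foldLeft(init) { (centers, _) =>
    // Assignment step: attach each point to its nearest center.
    val clusters = points.groupBy(p => centers.minBy(c => math.abs(c - p)))
    // Update step: move each center to the mean of its cluster
    // (a center with no points keeps its position).
    centers.map(c => clusters.get(c).map(ps => ps.sum / ps.size).getOrElse(c))
  }

val centers = kMeans1D(Seq(1.0, 1.2, 0.8, 9.0, 9.5, 8.5), init = Seq(0.0, 10.0), iters = 5)
println(centers)  // converges near (1.0, 9.0)
```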
MODULE 15 : SPARK STREAMING USING SCALA
- Introduction to Spark streaming
- The architecture of Spark Streaming
- Working with the Spark streaming program
- Processing data using Spark streaming
- Requesting counts and DStreams
- Multi-batch and sliding window operations
- Working with advanced data sources.
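Spark Streaming chops a live stream into micro-batches (a DStream) and supports windowed operations spanning several batches. The sketch below imitates a sliding-window count with plain Scala so it runs locally; on a DStream the analogous call is `countByWindow(windowDuration, slideDuration)`, and the sample numbers are invented.

```scala
val batchCounts = Seq(3, 1, 4, 1, 5)  // events arriving per micro-batch

// Window of 3 batches, sliding forward 1 batch at a time:
// sum the event counts inside each window.
val windowed = batchCounts.sliding(3, 1).map(_.sum).toList
println(windowed)  // List(8, 6, 10)
```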
MODULE 16 : DATA SCIENCE TRAINING OUTCOME
- The participant will be familiar with:
- Spark development using RDDs
- Spark development using DataFrames
- Spark development using Spark Streaming
- Spark development using MLlib
- Spark development using Scala.
- Duration 50 hours
- Skill level All levels
- Language English
- Assessments Yes